Response To (Scott Alexander): Bounded Distrust

Would that it were that simple.

There is a true and important core idea at the center of Bounded Distrust.

You can (and if you are wise, you often do) have an individual, institution or other information source that you absolutely do not trust to reliably tell the truth or follow its own explicit rules. Yet by knowing the implicit rules, and knowing the incentives in place and what the consequences would be if the source was lying to various extents, you can still extract much useful information.

Knowing what information you can and can’t extract, and what claims you can trust from what sources in what contexts to what extent, is a vital life skill.

It is also a difficult and often anti-inductive skill. Where there is trust, there is the temptation to abuse that trust. Each person has a unique set of personal experiences in the world and samples a different set of information from various sources, which then evolves one’s physical world models and one’s estimates of trustworthiness in path-dependent ways. Making real efforts from your unique epistemic perspective will result in a unique set of heuristics for who you can and cannot trust.

Perspectives can become less unique when people decide to merge perspectives, either because they trust each other and can trade information on that basis, or because people are conforming and/or responding to social pressure. In extreme cases large groups adopt an authority’s stated heuristics wholesale, which that authority may or may not also share.   

Scott’s model and my own have much in common here, but also clearly have strong disagreements on how to decide what can and cannot be trusted. A lot of what my weekly Covid posts are about is figuring out how much trust we can place where, and how to react when trust has been lost.

This all seems worth exploring more explicitly than usual.

I’ll start with the parts of the model where we agree, in list form.

  1. None of our institutions can be trusted to always tell the truth.
  2. However, there are still rules and associated lines of behavior.
  3. Different rules have different costs associated with breaking them.
  4. These costs vary depending on the details, and who is breaking what rule.
  5. This cost can, in some cases, be so high as to be existential.
  6. In some situations, this cost is low enough that statements cannot be trusted.
  7. In some situations, this cost is high enough that statements can be trusted.
  8. Often there will be an implicit conspiracy to suppress true information and beliefs, but the participants will avoid claiming the information is false.
  9. Often there will be an implicit conspiracy to spread false information and beliefs, but the participants will avoid explicitly claiming false information.
  10. This conspicuous lack of direct statements is often very strong evidence.
  11. The use of ‘no evidence’ and its synonyms is also strong evidence.
  12. There will sometimes be ‘bounded lying’ where the situation is painted as different than it is but only by a predictable fixed amount. If you know the rules, you can use this to approximate the true situation.

The difference is that Scott seems to think that the government, media and other authority figures continue mostly to play by a version of these rules that I believe they mostly used to follow. He doesn’t draw any parallels to the past, but his version of bounded distrust reads like something one might plausibly believe in 2015, and which I believe was largely the case in 1995. I am confused about how old the old rules are, and which ones would still have held mostly true in (for example) 1895 or in Ancient Rome.

Whereas in 2022, after everything that has happened with the pandemic and also otherwise, I strongly believe that the trust and epistemic commons that existed previously have been burned down. The price of breaking the old rules is lower, but it is more than that. The price of being viewed as actually following the old rules is higher than the cost of not following them, in addition to the local benefits of breaking the old rules. Thus the old rules mostly are not followed.

The new rules are different. They still bear some similarities to the old rules. One of the new rules is to pretend (to pretend?) to be following the old rules, which helps. The new rules are much less about tracking physical truth and much more about tracking narrative truth.

It seems useful to go through Scott’s examples and some obvious variants of them as intuition pumps, but first seems worth introducing the concept of the One Time, and also being clear about what this post doesn’t discuss due to time and length constraints, since that stuff is very important.

Bounded Discussion (Or: Why Can Wait)

This is the long version of this post due to lack of time to write a shorter one. I do hope at some point to write a shorter version.

This post is already extremely long, so it doesn’t have additional space to get much into my model of why much of this is the case.

Here are some of the things I am conspicuously excluding due to length and time.

I’m excluding all discussion of simulacra levels, and all discussion of moral mazes or even motive ambiguity, or the dynamics of implicit conspiracies, despite them being important to the underlying dynamics. I’m excluding all the reasons why there is pressure to break the rules as much as possible and be seen to be doing so, and why this pressure is currently historically high and increasing over time along with pressures to visibly reverse all principles, values and morality.

Thus I’m not ‘putting it all together’ in an important sense. Not yet.

I’m excluding all discussion of what the Narrative actually is, how it gets constructed and decided upon, what causes it to update and to what extent it responds to changes in the physical world.

I’m excluding all discussion of why there exists an Incorrect Anti-Narrative Contrarian Cluster (ICC), or a Correct Contrarian Cluster (CCC), or how their dynamics work in terms of what gets included or excluded, or what pushes people towards or away from them.

I’m excluding most of the necessary discussion of how one evaluates a particular source to decide how bounded one’s distrust in that particular source should be and in which particular ways, and how I identify sources I can mostly or entirely trust or ones which are less trustworthy than their basic identity would suggest.

I’m excluding discussion about how to elicit truth from by-default untrustworthy sources, if given the opportunity to interact with them, which is often possible to varying degrees.

I’m excluding a bunch of synthesis that requires more careful simmering and making into its own shorter post.

I’m excluding a bunch of other things too. This is quite the rabbit hole to go down.

Now, on to the One Time.

One Time

There’s a lot of great tricks that only work once. They work because it’s a surprise, or because you’re spending a unique resource that can’t be replenished.

The name here comes from poker, where players jokingly refer to their ‘One Time’ to get lucky and hit their miracle card. Which is a way of saying, this is the one that counts.

That’s the idea of the One Time. This is the high leverage moment. It will give away your secret strategy, show your opponent their weakness, blow your credibility, use all your money, get you fired, wreck the house or cash in that big favor. The rules will adjust and it won’t work again.

Or alternatively, if it succeeds you get your One Time back, but damn it This Had Better Work or there will be hell to pay. You come at the king, you best not miss.

Maybe that’s acceptable. Worth It. Then do what you have to do.

That’s one way of thinking about the price one has to pay for breaking some of these rules. Would this be slight annoyance? Or would you be cashing in your One Time?

Shooting at Yankee Stadium

Scott frames this as you being a liberal and thus lacking trust in Fox as a source, but it’s important to note that this does not matter. Either Fox News is trustworthy in a given situation, or it is not. MSNBC is also either trustworthy in a given situation to a given degree, or it is not. Your views on social and economic policies, and which party you want in power to what degree, should not matter. The exception is if the reason you are on one side or the other is that you believe one side’s sources are more honest, but if that’s true then you’re a liberal because you don’t trust Fox News, rather than not trusting Fox News because you’re a liberal.

Anyway, this is his first example:

One day you’re at the airport, waiting for a plane, ambiently watching the TV at the gate. It’s FOX News, and they’re saying that a mass shooter just shot twenty people in Yankee Stadium. There’s live footage from the stadium with lots of people running and screaming.

Do you believe this?

Yes, of course I believe it. In fact it’s rather overdetermined. Why?

  1. Clear physical fact claims, specific details.
  2. If false, will be known to be false quickly and clearly.
  3. If caught getting this wrong, price would be high relative to stakes.
  4. Admission against interest (could also say poor product-market fit). 
  5. Live footage from the stadium.
  6. They are not in the habit of this type of lie.

The combination of these factors is very strong, and in the absence of counterevidence I would treat this as true with probability of essentially (1 minus epsilon).

I agree with Scott that deep fakes of live events are beyond the reasonable capabilities of Fox News or any other similar organization at this time. And also that even if they could do them, the price of getting caught doing so would be very high, even higher than the already high price of being seen getting this wrong. So the live footage alone makes me believe whatever I see on the live footage.

I would only doubt that if the stakes involved were somehow high enough that Fox News would plausibly be cashing in their One Time (e.g. if #3 was false, because the value at stake rivaled the potential price).

Note of course that the live footage doesn’t always mean what it looks like it means. It can and will be framed and edited to make it look like they want it to look, and anyone interviewed might be part of the production. It doesn’t automatically imply a mass shooting. But you can trust the literal evidence of your senses.

If it was Yankee Stadium without live footage, that lack of footage would be highly suspicious, because there should be cameras everywhere and Fox should be able to get access. I’d wonder what was up. But let’s say we move this to a place without such cameras, so it’s not suspicious, or otherwise we don’t have footage that actually proves that the shootings happened (and for whatever reason it’s not suspicious that we lack this). Are we still good?

Yeah, we’re still good. It’s still reporting physical facts with specific details, in a way that if anything goes directly against Fox’s vested interests. There’s no reason to lie.

What if in addition to removing the live footage, it was MSNBC or CNN instead, so there was a clear reason to claim there was a mass shooting but the situation is otherwise unchanged?

Now I notice that this is a sufficient combination of missing factors that I’m moving from numbers like (p = 1 – epsilon) to something more like p~0.95. They could make a mistake here, they have reason to make a mistake here, and they’re in the habit of calling things mass shootings whenever possible. The price for getting this wrong isn’t zero, but the mainstream media is good at memory holing its ‘mistakes’ of this type and isn’t trying to be super reliable anymore.

They are in the habit of this kind of lie, of finding ways to claim there are lots of mass shootings all the time, and characterizing everything they can as a mass shooting, so #6 also does not apply, although there would still be something that their source was claiming had happened – they wouldn’t as of yet be willing to use this label if none of their sources were saying bullets were involved or that anyone had come to harm. 

It’s probably still a mass shooting, but if my life depends on that being true, I’m going to double check.

Scott’s next hypothetical:

Fox is saying that police have apprehended a suspect, a Saudi immigrant named Abdullah Abdul. They show footage from a press conference where the police are talking about this. Do you believe them?

Once again, yes, of course. This is no longer an admission against interest, but I notice this is an actual red line that won’t be crossed. The police either apprehended a suspect named Abdullah Abdul from Saudi Arabia or they didn’t, this can be easily verified, and there will be a price very much not worth paying if this is claimed in error. There is a strong habit of not engaging in false statements of this type.

If this was more speculative, they would use particular weasel words like ‘believed to (be/have)’ at which point all bets aren’t quite off but the evidence is not very strong. If the weasel words aren’t there, there’s a reason.

However, I don’t agree with this, and even more don’t agree with the sign-reversed version of it (e.g. flop MSNBC for FOX and reverse all the facts/motivations accordingly):

It doesn’t matter at all that FOX is biased. You could argue that “FOX wants to fan fear of Islamic terrorism, so it’s in their self-interest to make up cases of Islamic terrorism that don’t exist”. Or “FOX is against gun control, so if it was a white gun owner who did this shooting they would want to change the identity so it sounded like a Saudi terrorist”. But those sound like crazy conspiracy theories. Even FOX’s worst enemies don’t accuse them of doing things like this.

This very much does not sound like a crazy conspiracy theory. It is not crazy. Also it would not be a conspiracy. It would be some people making some stuff up, only in locally noticeably more brazen ways than those previously observed, and which we thus think is unlikely. But if someone came into Scott’s office and said ‘I think FOX’s story today about that Saudi terrorist is importantly false’ then it would be a mistake to suggest therefore putting this person on medication or asking them to go to therapy.

Of course it matters that FOX is biased and would very much like to make up a case of Islamic terrorism. FOX makes up cases of Islamic terrorism, the same way that MSNBC shoves them under a rug. And my lord, FOX would totally love to change the identity so it sounded like a Saudi terrorist. Of course they would. And MSNBC would love to make it sound like it was a white gun owner.

Before the identity is known, MSNBC and friends will run stories that assume of course it is a white gun owner, while FOX and friends will run stories that assume of course it is an Islamic terrorist. And they will hold onto those assumptions until the last possible moment when it would be too embarrassing not to fold, in the hopes of leaving the right impression (for their purposes) with as many people as possible, and to signal their loyalty to their narrative model of the world. And then they will insist they didn’t say the things they previously said, and they will definitely insist they certainly haven’t repeated this pattern dozens of times.

The question is, does adding the detail of the police identifying the suspect push this sufficiently over the line that the actions in question become implausible? With these details, my answer is yes, in the central sense of there being an apprehended suspect named Abdullah Abdul from Saudi Arabia.

Whereas on MSNBC, they’re probably whistling and pretending not to notice this person’s name and origin because they’re suddenly not important, and having experts on saying things like ‘we have no idea what caused this incident, who can know, but we do know that there are so many more shootings here than any other country.’

Now flip it again, and suppose the suspect was a white gun owner. Fox will keep talking about the threat of Islamic terrorism and pretend not to notice the person was white, and probably expound upon the various FOX-friendly potential motivations and histories that could be involved long after they’re no longer remotely plausible.

Now imagine the person in question was both, and was a white person who happened to be born in Saudi Arabia, and whose name (whether or not it was given at birth) was Abdullah Abdul, and watch two completely disjoint sets of facts get mentioned.

But, you say. But! They still wouldn’t outright say the fully false things here. There are rules, you say. Everyone involved is distorting everything in sight but there’s still this big signpost where they say ‘the police have apprehended a suspect named X with characteristics Y’ and you know X is the suspect’s name, and you probably know they have characteristics Y depending on how slippery that could potentially be made.

And yes, you’re probably right. Last time I checked, they do have a red line there. But there’s a bunch of red lines I thought they had (and that I think previously they did have) that they’ve crossed lately, so how confident can we be?

Scott says this:

And there are other lines you don’t cross, or else you’ll be the center of a giant scandal and maybe get shut down. I don’t want to claim those lines are objectively reasonable. But we all know where they are. And so we all trust a report on FOX about a mass shooting, even if we hate FOX in general.

Scott links to Everybody Knows to indicate this is a ‘the savvy know this and then treat it like everyone knows.’ But the savvy are necessarily a subset, and not all that large a subset at that. Not only does everyone very much not know this, I don’t even know this.

I have a general sense of where those lines seem to be, but they seem to be different than where the lines were five years ago, which in turn is different from thirty years ago. I am not confident I have them located correctly. I am damn sure that very far from everybody knows even that much with any confidence, and that those who think they are damn sure often strongly disagree with each other.

I don’t expect this example to rise to anything like the level where FOX might get shut down and I’d expect it to be forgotten about within a few weeks except maybe for the occasional ‘remember when FOX did X’ on Twitter. They’ll claim they made a mistake and got it wrong and who are you to say different and why should we believe your biased opinion? That seems so much more likely to me than that this suddenly becomes a huge deal.

The reason I still believe FOX (or MSNBC in reverse) in this spot is because it’s still not something they’re in the habit of doing, and it’s still a dumb move strategically to choose this spot to move expectations in this way, in ways they can understand intuitively, and mostly that it feels like something that will feel to them like something they shouldn’t do. It doesn’t pattern match well enough to the places where outright lies have already happened recently. Right now. For now.

Yet, for all our explicit disagreements, I expect Scott in practice to be using almost the same heuristics I am using here if such events were to happen, with the difference being that I think Scott should be adjusting more for recent declines in deserved trust, while he likely thinks I’m adjusting too far.

Lincoln and Marx

I’m going to first deal with the Lincoln and Marx example, then with the 2020 election after, although Scott switches back and forth between them.

Here’s a Washington Post article saying that Abraham Lincoln was friends with Karl Marx and admired his socialist theories. It suggests that because of this, modern attacks on socialism are un-American.

Here is a counterargument that there’s no evidence Abraham Lincoln had the slightest idea who Karl Marx was.

I find the counterargument much more convincing. Sometimes both the argument and counterargument describe the same event, but the counterargument gives more context in a way that makes the original argument seem calculated to mislead. I challenge you to read both pieces without thinking the same.

A conservative might end up in the same position vis-a-vis the Washington Post as our hypothetical liberal and FOX News. They know it’s a biased source that often lies to them, but how often?

So both sides are often lying, but with some conditions under which a given statement can still be trusted. The question is what conditions still qualify.

So before looking at the counterargument, we can start with the easy observation that the headline is definitely at least a claim without evidence, which I would consider in context to be lying. Scott excuses this by saying that headline writers are distinct from article writers, and make stuff up, and everybody knows this and it’s fine. Anything in a headline that isn’t a tangible specific fact is complete rubbish.

The body of the article is a real piece of work. I didn’t need to see the counterargument to know it stinks, only to know exactly how much it stinks. It is doing the association dance, the same one used when someone needs to be canceled. Other than being about someone long dead, and that the author thinks socialism is good actually, this seems a lot like what The New York Times did to Scott Alexander, drawing the desired associations and implications by any means technically available, and because there was nothing there, being made of remarkably weak sauce.

Here Lincoln is ‘surrounded by’ a certain kind of person, and someone is that kind of person if they ‘made arguments’ that are of the type that a person of that point of view would make. I totally noticed that the argument that Lincoln was reading Marx was that he was a columnist in a newspaper Lincoln was reading, which is like saying I was, as a child, a reader of William Safire because I read the New York Times. The ‘exchanged letters’ thing, where Lincoln wrote back a form letter, I can’t say for sure I would have picked up on my own, but I like to hope so. The clues are all there.

That’s the thing. The clues are all there. This is transparent obvious bullshit.

It’s still easy to not spot the transparent obvious bullshit. When one is reading casually or quickly, it’s a lot easier to do a non-literal reading that will effectively lie to you, than the literal reading that won’t. Not picking up on (or noticing in a conscious and explicit way that lets you reject them) the intended insinuations requires effort. And despite the overall vibe of the post being transparent enough to me that it would trigger a ‘only a literal reading of this will be anything but bullshit,’ it was less transparent to others – Scott said in a comment to a draft of this post that he’s not confident he would have sufficiently noticed if he’d seen only the original but not the rebuttal. 

The direct quotes of Lincoln here are interesting. They do have quite the echo to things Marx said. And to things many others of that era said who had very different beliefs. They also make perfect sense if you interpret them as ‘you should want the slaves to be freed,’ which is the obvious presumed context when I read them, and which was then confirmed by the context later provided by the counterargument. Which also seems to include such lines as:

“Capital,” Lincoln explained, “has its rights, which are as worthy of protection as any other rights.”

They also are missing the thing that makes a socialist a socialist, which is to declare that we should find the people with the stuff, point guns at them, and take their stuff. It doesn’t even quote him saying this about slaves, and he’s the one who freed the slaves, so it seems like a strange omission. In this type of agenda-pushing, one can safely assume that if there was better material available it would have been used.

The counterargument misunderstands what is going on here.

Brockell badly misreads her sources and reaches faulty conclusions about the relationship between the two historical contemporaries. Contrary to her assertion, there is no evidence that Lincoln ever read or absorbed Marx’s economic theories. In fact, it’s unlikely that Lincoln even knew who Karl Marx was, as distinct from the thousands of well-wishers who sent him congratulatory notes after his reelection.

There’s the fact that technically no one said Lincoln read Marx’s economic theories but that’s not the point here. Brockell did not misread anything. Brockell looked for words that could be written to give an impression Brockell wished to convey while not crossing the red line of saying definitively false things of the wrong type, and Brockell found the best such words that could be found. There are no ‘faulty conclusions’ here, there are only implausible insinuations.

Anyway, yes, the rebuttal is deeply convincing, and the fact that the original made it into the Washington Post should be deeply embarrassing. Yet it was not. Scott notes that it was not, that everyone forgot about it. Scott seemingly thinks not only that the Washington Post will pay zero price for doing this, but that this was entirely predictable. As a ‘human interest’ story, in his model, no one is checking for such obvious hackery or caring about it, it’s par for the course, you should expect to see ‘we can’t know for sure that wet ground causes rain, but we do know that there’s a strong correlation, and where wet ground you can usually look up and see the rain coming down’ and who cares, it’s not like it matters whether the rain caused the wet ground or the other way around, that’s a human interest story.

There’s also the question of whether this story is lying or not. Scott seems to be trying to have it both ways.

First off, there’s the common sense attitude that the Marx/Lincoln article is of course lying. But the claim is then that this is because the questions in it are not to be taken seriously, and trust only matters when questions are sufficiently serious.

Then there’s the thing where the article didn’t technically lie aside from the headline. Which is true.

It’s hard for a naïve person to read the article without falsely concluding that Marx and Lincoln were friends. But the article does mostly stick to statements which are literally true.

I don’t think it’s mostly? I think the statements are each literally true. It’s more like it’s full of insinuation and non-sequiturs. This paragraph, for example, is all completely true aside from the questionable ‘was surrounded by socialists,’ but is also completely obvious nonsense. It gives the impression that conclusions should be drawn without actually justifying those conclusions at all, which is classic.

President Trump has added a new arrow in his quiver of attacks as of late, charging that a vote for “any Democrat” in the next election “is a vote for the rise of radical socialism” and that Rep. Alexandria Ocasio-Cortez (D-N.Y.) and other congresswomen of color are “a bunch of communists.” Yet the first Republican president, for whom Trump has expressed admiration, was surrounded by socialists and looked to them for counsel.

What are the potential outright falsehoods?

There’s that line about ‘surrounded by socialists’ above. The only evidence given is that there were a few people around Lincoln who expressed some socialist ideas, and who encouraged him to free the slaves. That doesn’t seem like it clears the bar on either ‘socialist’ or ‘surrounded.’ There are two socialists referenced, one of whom ran a Republican newspaper, supported him, and then investigated generals on his behalf, none of which has much to do with socialism. It’s no surprise that Lincoln ‘eagerly awaited’ dispatches about his generals, since his generals were one of his biggest issues. The other also ran a newspaper. It’s almost as if someone who wanted to run for office decided to become friends with the people who had access to printing presses. Smart guy, that Lincoln.

And there’s a bunch of statements like this. They seem more wrong than right, but not quite wrong enough to be lies.

There’s this:

If you think that sounds like something Karl Marx would write, well, that might be because Lincoln was regularly reading Karl Marx.

This is highly misleading in the sense that ‘regularly reading Karl Marx’ refers to his Crimea War dispatches in a newspaper, which he in turn might or might not have been doing, but technically that still counts. The question is whether the logical implication here counts as lying, since if you know the details it’s obvious that this could not have been why Lincoln wrote what he wrote.

Scott claims ‘the Marx article got minimal scrutiny’ but it manages to very carefully follow the correct exact pattern, and predictably got a bunch of scrutiny afterwards. I don’t buy it.

So my conclusion is that the article is intentionally misleading, a piece of propaganda designed to be obviously bullshitting in order to push a political agenda and make it clear you are willing to engage in obvious bullshit to support a political agenda. 

But it’s bullshit, and isn’t lying, except for the headline. It follows The Rules, the Newspaperman’s Code that says that you can’t print known-to-be-technically-false things.

Human Interests

That leads to me getting confused by this.

Finally, the Marx thing was intended as a cutesy human interest story (albeit one with an obvious political motive) and everybody knows cutesy human interest stories are always false.

It could be reasonably said that everybody knows cutesy human interest stories are warped narratives at best and often centrally false, designed to give the desired impression and support the desired narrative. The post about rescuing that cat stuck in a tree is either going to talk about the dark underbelly of shady cat rescuers or else it’s going to be a heartwarming story about how a cute child got their kitty back. What it isn’t going to be is fair and balanced.

You can call this a ‘cutesy human interest story’ if you come from a background where being socialist is obviously great, but even then I don’t buy it because the purpose of this is to be used as ammunition in within-ingroup arguments to try and show one’s adherence to party lines. It’s not to try and convince any outgroup members because, as Dan Quayle famously put it and is quoted later in Scott’s post, no one was fooled.

Such people gave The Washington Post clicks, as did Scott here. The author showed their loyalties and ability to produce viral content of a similar nature. Missions accomplished.

But the question I have is: What makes the rules observed here different from the rules elsewhere?

My answer to that is nothing. The rules are the same.

This is exactly the level of misleading one should expect, at a minimum, on a ‘how and in which way do Very Serious People want me to be worried this week about Covid-19’ story. Or on a post about how an election (was / was not) stolen. This is exactly the level of misleading I expect any time there is a narrative and an interest in pushing that narrative.

In fact, I’d call this an excellent example of where the line used to be. The line used to be exactly here. You could do this. You couldn’t do more.

The difference is that people are increasingly doing somewhat more than this. That’s why we had to go through the steps earlier with the hypothetical shootings at Yankee Stadium. If 2012-media from any side tells me there’s a mass shooting at Yankee Stadium, I believe them, full stop, we don’t need the other supports. That’s specific enough. Today, it’s not enough, and we need to stop and think about secondary features.

It is often said that if you read an article in a newspaper about the field you know best it will make statements that are about as accurate as ‘wet ground causes rain,’ and you should then consider that maybe this isn’t unique to the field you know best. That certainly matches my experience, and that’s when there isn’t an obvious narrative agenda involved. When there is, it’s a lot worse.

Scott’s attempt to draw the distinction that expert historians specifically into Marx and Lincoln are not known to be saying nice things about this article feels like ad hoc special pleading, a kind of motte/bailey on what contextually counts as an expert. It also isn’t relevant, because ‘praise’ is not vouching even for its not-outright-lying status let alone its not-lying-by-implication status. Under the model, ‘praise’ is unprincipled, cannot be falsified, and thus doesn’t imply what Scott is suggesting it does, and mostly is only evidence of what is in the Narrative.

Scott notices that he never expected any of this to check out under scrutiny, because stories like this are never true, and certainly there were overdetermined contextual clues to allow that sort of conclusion even before the takedown. With the takedown, it’s trivial.

The 2020 Election

A conservative might end up in the same position vis-à-vis the Washington Post as our hypothetical liberal and FOX News. They know it’s a biased source that often lies to them, but how often?

Here’s a Washington Post article saying that the 2020 election wasn’t rigged, and Joe Biden’s victory wasn’t fraudulent. In order to avoid becoming a conspiracy theorist, the conservative would have to go through the same set of inferences as the FOX-watching liberal above: this is a terrible news source that often lies to me, but it would be surprising for it to lie in this particular case in this particular way.

I think smart conservatives can do that in much the same way smart liberals can conclude the FOX story was real. The exact argument would be something like: the Marx article got minimal scrutiny. A few smart people who looked at it noticed it was fake, three or four people wrote small editorials saying so, and then nobody cared. The 2020 election got massive scrutiny from every major institution.

To be safe, I’ll reiterate up front that I am very confident the 2020 election was not rigged. But I didn’t get that confidence because liberal media sources told me everything was fine, I got it because I have a detailed model of the world where there’s lots of strong evidence pointing in that direction. That and the stakes involved are why I broke my usual no-unnecessary-politics rules in the post after the election was clearly decided, to be very explicit that Biden had won the election – it was a form of cashing in one’s One Time in a high-leverage moment, bending one’s rules and paying the price.

As I write this, I haven’t yet looked at the WaPo article so I can first notice my expectations. My expectation is that the WaPo article will have a strong and obvious agenda, and that it will be entirely unconvincing to anyone who hadn’t already reached the conclusion that the 2020 election wasn’t rigged, and will primarily be aimed at giving people a reference with which to feel smug about the stupid people who were ‘fooled by the Big Lie’ and think the 2020 election was rigged.

Notice that Scott’s argument rests here on the difference between the election article and the Marx article. The Marx article should not be believed. But I notice that I expect both articles to be following the same standards of evidence and honesty. Whoops.

Enough preliminaries. Time to click and see what happens.

We can start with the headline. As we’ve established, the headline is always bullshit.

Guess what? There (still) wasn’t any significant fraud in the 2020 presidential election.

So that’s a really strange turn of phrase, isn’t it? That still?

I mean, what would it mean for there not to have been fraud in the 2020 presidential election as of some point in the past, looking back on the election, but for that to have changed, so that now there was fraud where previously there had not been?

Either there was fraud or there wasn’t fraud. There’s no way for that answer to change retroactively, unless the fraud took place in the interim, which isn’t anyone’s claim. So the mentality and model behind this headline is saying that whether there was fraud is somehow an importantly different claim than whether or not someone did a fraudulent thing at the time.

Instead, it’s about what the current narrative is. The current narrative is that there wasn’t fraud. The past narrative is that there wasn’t fraud. Thus, there (still) wasn’t any fraud, because ‘there was fraud’ means ‘the narrative contains there being fraud.’

One can make claims about what is or is not in the narrative, under this lexicon, but there isn’t an obvious combination of words that says whether or not someone did a physical act, only whether or not someone is generally said to have committed that act.

In other words, under this system, if I ask ‘was there significant fraud in the 1960 presidential election?’ I am asking whether the narrative says there was such fraud. And therefore the answer could be ‘no’ one day and ‘yes’ the next and then go back to ‘no’ based on what those who control the narrative prefer.

More charitably, one could interpret this as ‘there is still no evidence for’ (which is always false, there’s never no evidence of anything) or ‘there is still overwhelming evidence against’ (which is both stronger and has the benefit of being true), cut down because headlines have limited space, and conclude that I’m reading too much into this.

I don’t think so. The headline could have read “Evidence Still Overwhelmingly Says No Significant Fraud in 2020 Election” and been shorter. This was a choice. I think this headline is smug and has the asshole nature and makes it clear that this is how words are supposed to work and that none of this is an accident.

Let us begin.

It’s been more than a year since the 2020 presidential election ended according to the calendar, though, according to the guy who clearly and unquestionably lost that election, Donald Trump, things are still up in the air. For 400 days, Trump has been promising sweeping evidence of rampant voter fraud in that election. It’s eternally just around the corner, a week away. Two. It’s his white whale and his Godot. It’s never secured; it never arrives.

Yeah, that’s all going to get past a fact checker as defensible things to say, but: Who is the intended audience here? Who is this trying to inform? If you are reading this while previously believing the election was stolen, do you keep reading? Given the headline, what’s the chance you even got that far?

The entire article is written like this, with the baseline assumption that Trump is lying in bad faith and that claims of fraud are illegitimate.

Let’s push through the fact that the whole thing has the asshole nature and has no interest in providing anything but smugness while pretending to be #Analysis, and look at what the actual claims are that one might ‘have to be a conspiracy theorist’ not to believe, since that’s the core question, except man they make it hard.

Yet there he was, offering the same excuse once again when asked by the Associated Press. The occasion was AP’s exhaustive assessment of the 2020 election in which they uncovered fewer than 500 questionable ballots. Questionable! Not demonstrably fraudulent, but questionable. But Trump, never bound to reality, waved it away.

If you’re going to put lines like ‘Trump, never bound to reality’ into your statement, it’s really hard to complain that people on the other side aren’t viewing you as a credible source. You’re spouting obvious nonsense for the sole purpose of delivering snappy one-liners and then wondering why those who think the target of that putdown should be in the White House aren’t updating on your factual claims.

I mean, they end on this:

On the other hand, we have a guy who was documented as having said false things tens of thousands of times while serving as president continuing to insist that proof of wide-scale fraud is just around the corner.

But if you asked me to find tens of thousands of times the Washington Post has said that which was not, via a similar standard, do you think it would be hard?

We’re agreed that they lie all the time. And Scott is making the Bounded Distrust argument that this doesn’t much matter. That argument would need to apply equally to Donald Trump. And it seems like it does, in the sense that there are some statements he makes and I think he’s probably giving me new true information, and other times he makes statements and I don’t think that, and there’s a kind of concreteness that’s a large part of the distinction there.

And who is indeed sometimes bound to reality, and also often plays the game by exactly these rules. ‘Many people’ are saying X, you see, a lot of people, very good people. But not Trump, not directly, because those are the rules of the game. Similarly, when Cohen testified he noticed Trump being very careful with his word choices in private conversations, for similar (legal) reasons. Trump will also outright lie, of course, but he is a politician and a real estate developer so please don’t act quite so surprised and outraged. And he too is playing a game of this type and will choose when the price is too high and when it isn’t. The only difference is that he managed to ‘make a deal’ and thus pays lower prices. So he buys more and more brazen falsehoods, and occasionally he picks a falsehood that feels some combination of worthwhile and true to him and decides to double down on it.

Which is, as everybody knows, the rule for politicians. Who, with notably rare exceptions, will lie, to your face, all the time, about actual everything.

It’s worth noting that the linked-to report from Wisconsin, also in WaPo, was better on many dimensions, not perfect but definitely coming from a world in which there is more focus on physical world modeling and less on narrative.

When I focus purely on the facts from this article that seem like sufficiently detailed non-trivial physical claims that they have content that we could potentially rely upon, and edit to take out all the dripping contempt and hatred, what’s left is this.

  1. The AP’s assessment of the 2020 election uncovered 473 questionable ballots.
  2. “He said a soon-to-come report from a source he would not disclose would support his case,” the AP reported Trump saying. Trump did respond with: “I just don’t think you should make a fool out of yourself by saying 400 votes.”
  3. A total of 25.6 million ballots were cast in the states analyzed by the AP.
  4. We have multiple state-level reviews conducted by Trump allies suggesting that the vote totals in contested states were legitimate. There has been no person who has stepped forward and admitted participation in any sort of scheme to throw the election and no discovery of rampant, coordinated fraud save for an effort to cast ballots in Macomb County, Mich., that constitutes most of AP’s total from that state — an effort that didn’t actually result in ballots being counted. And then there’s AP’s broad analysis of the vote in all six states that found only piecemeal problems.

Here’s an important thing not on the list:

It often takes a while for states and counties to adjudicate dubious ballots. It’s a lengthy process, matching cast votes with actual voters. But counties have a sense now of how often votes might have been cast illegally. In sum: fewer than 500.

Because that, you see, is allowed to be functionally false, and also actually is functionally false, conflating different numbers at least three times.

It’s conflating the ballots cast in the states analyzed by the AP – 25.6 million – with the combined number of ballots cast nationwide, which was about five times that number. Whereas the AP analyzed only a subset of those 25.6 million ballots. And it is then implicitly stating that there is zero chance that any ballot not viewed as suspicious by the AP could have been cast illegally. While the chance of any given ballot cleared by the AP having been cast illegally is very low, there are ways to do this that would not show up on the ballot itself, and that would not have been detected.
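To make the denominator game concrete, here is a back-of-the-envelope sketch. The 473 and 25.6 million figures come from the article; treating the nationwide total as roughly five times the analyzed-states total is the approximation from the paragraph above, not an exact count:

```python
# How much the choice of denominator changes the implied rate of
# questionable ballots. Figures from the article being discussed:
questionable = 473              # AP's count of questionable ballots
analyzed_states = 25_600_000    # ballots cast in the states AP examined
# Nationwide total approximated as five times that (per the text above):
nationwide = analyzed_states * 5

rate_states = questionable / analyzed_states
rate_nationwide = questionable / nationwide

print(f"rate within the analyzed states: {rate_states:.6%}")
print(f"rate if you quietly swap in the nationwide denominator: {rate_nationwide:.6%}")
```

Either way the rate is tiny, which is the article's legitimate point; the illegitimate move is presenting a number computed over one subset as if it covered everything.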

When you’re willing to make this level of misstatement about the core question at issue, it makes it that much harder to know where you can still be credible.

Essentially what this is saying is:

  1. The number I’m giving you comes from somewhere.
  2. It doesn’t have to be the thing you naturally think it is.
  3. That number can represent a subset or otherwise be heavily misleading.

The window of what we are forced to treat as real keeps narrowing.

The original version of our Fact Statement #3 was, in fact, this:

Even if every one of those 473 cases was an actual example of fraud, it’s out of a total of 25.6 million cast ballots.

Which implies that those 25.6 million ballots were all analyzed as part of the AP’s work. They weren’t. I had to realize this and back-edit to fix that.

Clicking through to the AP article provided clarity on many things, but still the whole thing boils down to whether or not you trust the Associated Press to do an investigation like this. I don’t think it makes you a ‘crazy conspiracy theorist’ to think that the AP was not, via this method, going to detect all or even most potential forms of fraud that might have taken place.

If I imagine to myself that Omega (a hypothetical omniscient omnipotent always fully honest entity) told me the election was fraudulent somehow, and then I’m told that the AP report is about to come out, I notice I still expect the AP report not to find anything. If there was anything that they would have been forced to find that way, someone else would have already found it. The AP doesn’t have to lie to simply not notice things, and given who they are I expect them to be very good at not noticing.

So all of this boils down to this:

  1. Liberal sources continue to push the narrative that there was no significant fraud.
  2. Liberal sources continue to push the narrative that all specific physical claims of significant fraud have been debunked.
  3. Trump continues to promise that he’ll come up with evidence Real Soon Now.
  4. The evidence is not currently present, because otherwise Trump would say so.

That’s true as far as it goes.

And you know what? Points three and four are actually really super strong evidence that no one has this kind of concrete evidence of fraud. It’s the kind of very specific claim – that Trump is not saying X – that if false would be rapidly exposed as false because Trump would be very clear he was doubling down on X and this would be reported.

Thus, when Scott says this:

In order to avoid becoming a conspiracy theorist, the conservative would have to go through the same set of inferences as the FOX-watching liberal above: this is a terrible news source that often lies to me, but it would be surprising for it to lie in this particular case in this particular way.

I say no, absolutely not. The article in question is saying a mix of that which is, and that which is not, and a lot of that which is but is mostly designed to imply that which is narratively convenient without regard to whether or not it is true.

You can reasonably argue that there are particular statements within the article that one can be highly confident from context are being stated accurately. But one can accept those particular statements without it forcing one to accept that the election wasn’t stolen. There’s no logical incompatibility here.

Part of what’s going on is that this ‘conspiracy theorist’ label is a threat being used, and you have to do things to avoid being labeled that way. In particular, you need to notice what everybody knows is a conspiracy theory right now, and avoid advocating for it. If that changes and something (for example, the lab leak hypothesis or UFOs) stops being considered a conspiracy theory, you can then switch your tune.

Things like this Washington Post article tell us nothing we don’t already know. All of the work is relying on this line and similar logic:

The 2020 election got massive scrutiny from every major institution.

The core argument is that the absence of evidence is, in this context with this many people looking this hard to find something, and with the magnitude of the necessary efforts to pull this off and the resulting amount of evidence that would be available to be found, and the number of people who could potentially talk, very strong evidence of absence. That the amount of ‘evidence of fraud’ that was found is about the amount you’d expect to find if there was no significant fraud and this kind of effort, if anything it’s less than that. It’s surprising that, even with nothing to find, stuff more suspicious than this couldn’t be found.

One could say that the liberal media would suppress such findings, and no doubt some parts of it would attempt to do so if such findings arose, but there are enough media sources on the other side that we need not worry much about such suppression happening without being noticed.

The liberal media could and did essentially use their One Time on Donald Trump in various ways, and paid the price in future credibility for doing so, but even with that it wouldn’t have been enough to sell us a centrally fraudulent election in a way that couldn’t be noticed.

All of the claims of fraud were even politely registered in advance as claims that would be made no matter what if the wrong side won, so they’re actually zero evidence of anything except in their failure to be better substantiated. Whereas if there was big fraud, we would almost certainly know. And the reason for that is that the ones claiming fraud realized that the distrust of institutions was no longer sufficiently bounded to convince people not to believe such fraud claims, so there was no incentive not to make the claims regardless of the degree of fraud.

Combine that with a reasonable prior, and you get extremely high confidence of no fraud.

What you don’t get is especially bounded distrust in the media sources involved.

As I was writing this, Marginal Revolution linked to this excellent post about why the USA is unlikely to face civil war. Among other things, it notices that various measurements of America’s democracy were altered to make Trump look scary in ways that don’t make any sense. Then America’s past was retroactively made worse to keep it consistent with the ratings given to modern America, both to make Trump look maximally bad and to get in digs on the outgroup while they were at it. You can fairly say, once again, blah blah blah, none of those are specific actual physical world falsifiable claims, so of course all such things were pure political propaganda, and physical world falsifiable claims are different. But this kind of thing is then cited as a ‘source’ to back up claims, and sounds all scientific and tangible even though it’s not. It is also an example of something that wouldn’t have happened twenty years ago (as in, they went back and changed the ratings retroactively because the old ones were too fair), so it’s an example of the war on memory in such matters and also of the decay of the bounds of distrust. And also, these are ‘experts’ giving their opinions, so now ‘experts’ who aren’t making physical world falsifiable (in practice, not in theory) claims need to also be ignored by that standard.

Basically, I’m saying no, you can’t evaluate any of this by saying ‘look at all these experts’ and ‘look at all these institutions’ without also using your brain to think about the situation, the counterfactuals and the likelihood ratios of various observations and applying something that approximates Bayes Rule.
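The ‘likelihood ratios plus something approximating Bayes Rule’ procedure can be sketched in odds form. Every number below is an illustrative assumption I am making up for the sketch; the point is the mechanism, not the particular values:

```python
# Odds-form Bayes: posterior odds = prior odds × product of likelihood
# ratios, one per observation. All numbers here are illustrative
# assumptions, not measurements of anything.
prior_odds = 1 / 20  # hypothesis under test: "significant fraud occurred"

likelihood_ratios = {
    # P(observation | fraud) / P(observation | no fraud):
    "massive scrutiny found essentially nothing": 1 / 50,
    "no insider has come forward": 1 / 10,
    "fraud claims were pre-registered regardless of outcome": 1.0,  # uninformative
    "hostile partisan source asserts no fraud": 1 / 1.2,  # weak evidence
}

posterior_odds = prior_odds
for lr in likelihood_ratios.values():
    posterior_odds *= lr  # each observation multiplies the odds

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of significant fraud: {posterior_prob:.6f}")
```

Notice that the pre-registered claims contribute a likelihood ratio of exactly 1 – they were going to be made either way, so they move nothing – while the scrutiny-found-nothing observation does nearly all the work, which matches the argument in the text.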

I also am pretty sure that Scott did exactly the thing where you at least implicitly calculate a bunch of likelihood ratios of various observations and apply Bayes Rule, and came to the same conclusion as everyone else who did this in good faith in this case.

Science™

Scott tells this story.

According to this news site, some Swedish researchers were trying to gather crime statistics. They collated a bunch of things about different crimes and – without it being a particular focus of their study – one of the pieces of information was immigration status, and they found that immigrants were responsible for a disproportionately high amount of some crimes in Sweden.

The Swedish establishment brought scientific misconduct cases against the researchers (one of whom is himself “of immigrant background”). The first count was not asking permission to include ethnicity statistics in their research (even though the statistics were publicly accessible, apparently Swedish researchers have to get permission to use publicly accessible data). The second count was not being able to justify how their research would “reduce exclusion and improve integration.”

It counts as ‘scientific misconduct’ for you to not be able to justify how your research would ‘reduce exclusion and improve integration.’

Which is odd.

It means it is official policy that wrongfacts are being suppressed to avoid encouraging wrongthink and wrongpolicy.

It also means that we can no longer have a thing called ‘scientific misconduct’ that one can use to identify sources one cannot trust, since that now could refer to wrongfacts. If someone says ‘that person is accused of scientific misconduct’ I need to be very careful to get the details before updating, and if I don’t I’m effectively reinforcing these patterns of censorship.

But, Scott says, scientists have the decency to accuse them of misconduct for failure to reduce exclusion. This has the benefit of making it clear that this is an act of censorship and suppression rather than that the scientists did something else wrong, for anyone paying attention. If the claims were false, the scientists cracking down on wrongfacts would say the facts in question were wrong. By accusing someone of saying wrongfacts but not saying the wrong facts are wrong, you’re essentially admitting the wrongfacts are right. So this gives you, in this model, something to go on.

I believe that in some sense, the academic establishment will work to cover up facts that go against their political leanings. But the experts in the field won’t lie directly. They don’t go on TV and say “The science has spoken, and there is strong evidence that immigrants in Sweden don’t commit more violent crime than natives”. They don’t talk about the “strong scientific consensus against immigrant criminality”. They occasionally try to punish people who bring this up, but they won’t call them “science deniers”.

Let me tell you a story, in three acts.

  1. All masks don’t work unless you’re a health professional.
  2. All masks work.
  3. Cloth masks don’t work.

At each stage of this story, scientists got on television to tout the current line. At each stage of this story, the ‘science denier’ style labels got used and contrary views were considered ‘dangerous misinformation.’

Yes, we did learn new information to some extent, but mostly we knew the whole story from the beginning and it’s still true now. Cloth masks are substantially better than nothing, better masks are much better. Also the super-masks like P100s (or the true fashion statements that work even better) are far better than N95s and you’re basically never allowed to mention them or advocate for mass production. And yeah, we all knew this back in March of 2020, because it’s simple physics.

I could also tell you a story about vaccines. Something like this:

  1. Vaccines are being rushed.
  2. Vaccines are great and even prevent all transmission and you’re all set.
  3. Vaccines are great but you still have to do all the other stuff and also you need a booster even if you’re a kid unless you’re in one of the places that’s illegal. But only the one, definitely, that’s all.

And that’s entirely ignoring the side effect issue.

Once again, yes, you could say that the information available changed. On boosters, I’m somewhat sympathetic to that, and of course Omicron happened, but don’t kid yourself. Motivations changed, so the story changed.

Then there’s the lab leak hypothesis. And the other lab leak hypothesis.

Then there’s social distancing and ‘lockdowns’ and protests where the scientists declared that social justice was a health issue and so the protests weren’t dangerous. Which are words that in other contexts have meaning.

Then there’s the closing of the schools and remote learning and telling us masks and the other stuff isn’t doing huge damage to children.

Then there’s travel restrictions.

There’s the WHO saying for quite a long time that Covid isn’t airborne. 

There are the claims early on of ‘no community spread’ while testing was being actively suppressed via the CDC requiring everyone to use only its tests when it knew they didn’t work.

There’s Fauci saying we’d get to herd immunity at one number, then saying that when we’d made enough progress on vaccination he felt free to increase the number a bit more, indicating he didn’t care about what the real number was. And he wasn’t alone. 

And so on.

And in each case, the relevant ‘expert’ people who are wearing official ‘trust the science’ lapel pins explicitly lied, over and over again, using different stories, right to our f***ing faces. While arranging for anyone who disagrees with them to be kicked off of social media or otherwise labeled ‘dangerous misinformation.’ Then they lied and said they didn’t change their story.

So when we say that scientists ‘don’t lie directly’ we need to narrow that down a bit.

Can we say ‘don’t lie directly about specific actual physical world falsifiable claims?’

I mean, no. We can’t. Because they did and they got caught.

There’s still some amount of increasing costs to increasingly brazen misrepresentations. That’s why, in the Swedish example, we don’t see direct false statements to deny the truth of the claims made. The claims made are too clearly true according to the official statistics, so opening up yourself like that would only backfire. But that’s a tactical decision, based on the tactical situation.

This is, as Scott says, a game with certain rules. But not very many.

If there is a published paper or even pre-print in one of many (but not all) jurisdictions, I mostly assume that it’s not ‘lying about specific actual physical world falsifiable-in-practice-if-false claims.’

Mostly. And that’s it. That’s all I will assume about the paper.

I will not assume it isn’t p-hacked to hell, that it has any hope of replication, that anything not explicitly mentioned was done correctly, that the abstract accurately described the methodology or results, that their discussion of what it means is in good faith, or anything else, except where the context justifies it. I may choose to do things like focus on the control variables to avoid bias.
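On why ‘not p-hacked’ has to be a separate assumption: under the null hypothesis p-values are uniform on (0, 1), so a paper that quietly runs many subgroup comparisons and reports whichever clears p < 0.05 will manufacture a ‘finding’ out of pure noise much of the time. A minimal Monte Carlo sketch (the 20-test count is an arbitrary illustrative assumption):

```python
import random

# Under the null, each test's p-value is Uniform(0, 1). If a study runs
# n_tests comparisons and reports any p < alpha, the chance of at least
# one spurious "significant" result is 1 - (1 - alpha) ** n_tests.
def spurious_finding_rate(n_tests=20, trials=100_000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_tests))  # any test "significant"?
        for _ in range(trials)
    )
    return hits / trials

rate = spurious_finding_rate()
print(f"chance of at least one 'significant' result from noise: {rate:.3f}")
# analytic value: 1 - 0.95 ** 20, about 0.64
```

Which is why the headline claim of a single paper, absent knowledge of how many comparisons were run, tells you much less than it appears to.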

Outside of the context of an Official Scientific Statement of this type, even more caution is necessary, but mostly I still would say that if it’s something that, if false, I could prove was false if I checked then the scientist will find a way to not quite say the false thing as such.

So yeah, anthropogenic global warming is real and all that, again we know this for plenty of other good reasons, but the reasoning we see here about why we can believe that? No.

And that suggests to me that the fact that there is a petition like that signed by climatologists on anthropogenic global warming suggests that this position is actually true. And that you can know that – even without being a climatologist yourself – through something sort of like “trusting experts”.

This is not the type of statement that we can assume scientists wouldn’t systematically lie about. Or at least, it’s exactly the type of statement scientists will be rewarded rather than punished for signing, regardless of its underlying truth value.

That’s mostly what the petition tells you. The petition tells you that scientists are being rewarded for stating the narrative that there is anthropogenic global warming. And they would presumably be severely punished for saying the opposite.

Both these statements are clearly true.

The petition does not tell you that these people sincerely believe anything, although in this case I am confident that they mostly or entirely do. It definitely does not tell you that these people’s sincere beliefs are right, or even well-justified, although in this case I believe that they are. This kind of petition simply does not do that at this time. Maybe we lived in such a world a while ago. If so, we live in such a world no longer.

But why am I constantly putting in those reminders that I am not engaging in wrongthink? Partly because I think the wrongthink is indeed wrong and I want to help people have accurate world maps. Partly to illustrate how ingrained in us it is that there is wrongthink and rightthink and which one this is, and that this petition thus isn’t providing much evidence. And partly, because I really don’t want to be taken out of context and accused of denying anthropogenic global warming and have that become a thing I have to deal with and potentially prevent me from saying other things or living my life. Or even have to answer the question three times in the comments. And while I don’t think I was in any danger of all that here, I can’t be sure, so better safe than sorry.

In my case, if I believed the local wrongthink, I would avoid lying by the strategy of being very very quiet on the whole topic because I wouldn’t want to cash in this type of One Time on this particular topic and risk this being a permanent talking point whenever my name comes up. Wouldn’t be Worth It.

Others are surely thinking along similar lines, except not everyone has the integrity and/or freedom to simply say nothing in such spots. In any case, no, the petition did not tell me anything I did not already know, nor do I expect it to convince anyone else to update either.

Then Scott goes on to say this.

(before you object that some different global-warming related claim is false, please consider whether the IPCC has said with certainty that it isn’t, or whether all climatologists have denounced the thing as false in so many words. If not, that’s my whole point.)

So it sounds like the standard is specifically that the IPCC does not make statements that false things are definitely true. Whereas if ‘some climatologists’ make such claims, that’s unsurprising. So when enough scientists of various types go around saying we are literally all going to die from this and manage to convince a large portion of an entire generation to think they are so doomed they will never get to grow old, we can’t even treat that as evidence of anything, let alone call them out on that, because the IPCC hasn’t specifically said so. I mean, I checked and they don’t appear to have said anything remotely similar. 

Yet I don’t see them or any other ‘experts’ standing up to boldly tell everyone that yes we have much work to do but maybe we can all calm down a bit. And maybe we should avoid the overselling because it will cause people to think such ‘experts’ can’t be trusted. Whereas I see other ‘experts’ adding fuel to this fire, presumably because they think that only by getting people into that level of panic can they get people to actually do something. A potentially noble motive to be sure, depending on details and execution, but not exactly the names you can trust.

Some people wonder how so many people could not Trust the Science™ in such matters. I don’t wonder about that.

Nor do I think this is the reason Scott believes in AGW. Does Scott look like the type of person who says ‘oh all these experts signed a statement so I’m going to believe this important fact about the world without checking?’ No. No he does not. Scott is the type of person who actually looked at the evidence and evaluated what was going on for himself, because that’s what Scott does and the only mystery is how he does so much of it so quickly. Even for me, and by not-Scott ordinary-human standards I do a lot of analysis very quickly.

Ivermectin One Last Time Oh Please God Let This Be The Last Time

Last year I explained why I didn’t believe ivermectin worked for COVID. In a subsequent discussion with Alexandros Marinos, I think we agreed on something like:

1. If you just look at the headline results of ivermectin studies, it works.

2. If you just do a purely mechanical analysis of the ivermectin studies, eg the usual meta-analytic methods, it works.

3. If you try to apply things like human scrutiny and priors and intuition to the literature, this is obviously really subjective, but according to the experts who ought to be the best at doing this kind of thing, it doesn’t work.

4. But experts are sometimes biased.

5. F@#k.

In the end, I stuck with my belief that ivermectin probably didn’t work, and Alexandros stuck with his belief that it probably did. I stuck with the opinion that it’s possible to extract non-zero useful information from the pronouncements of experts by knowing the rules of the lying-to-people game. There are times when experts and the establishment lie, but it’s not all the time. FOX will sometimes present news in a biased or misleading way, but they won’t make up news events that never happen. Experts will sometimes prevent studies they don’t like from happening, but they’re much less likely to flatly assert a clear specific fact which isn’t true.

I think some people are able to figure out these rules and feel comfortable with them, and other people can’t and end up as conspiracy theorists.

A conspiracy theorist, officially now defined as anyone believing the Official Lying Guidelines are more flexible than you think they are (see: everyone driving slower than me is an idiot, anyone driving faster than me is a maniac).

Scientists engaging in systematic suppression of Ivermectin trials via various tactics? Well, of course. Scientists making certain specific kinds of false statements that go against the ‘rules’? Conspiracy theory. Even though the rules keep loosening over time, and sometimes some things labeled ‘conspiracy theory’ turn out true, and also many things labeled ‘conspiracy theory’ don’t actually even require a conspiracy, that’s just a way of dismissing the claims.

Scott wrote a long post about Ivermectin. In that post, did Scott rely on ‘experts’ to evaluate the various papers? No, he most certainly did not. Scott actually looked at the papers and considered the evidence on each one and made decisions and then aggregated the data. And then, after all that, he took a step back, looked holistically at the situation, found it best matched a hypothesis from Avi Bitterman (worms!) and went with it, despite no ‘experts’ having endorsed it, and then a lot of people went ‘oh yeah, that makes sense’ and adopted the conclusion, which is how this works, is exactly how all of this works, that’s Actual Science rather than Science™.

As in, yeah, step three above is true, the ‘experts’ definitely reach this conclusion. But also we looked at exactly why those experts got to that conclusion, and story checks out. Also Scott looked in detail himself and got a more interesting but fundamentally similar answer.

Yes, experts are sometimes biased, if you’re being charitable, or ‘engaged in an implicitly coordinated suppression of information in conflict with the current narrative’ if you’re being more realistic. Also, sometimes they’re simply wrong, they have limited information to work with and limited cognition and lousy incentives and lives and this whole science thing is hard, yo. That’s why Scott had to spend countless hours doing all that work for himself rather than ‘Trusting the Science™.’ Which looks a lot different than ‘the experts wouldn’t lie about this particular thing so of course Ivermectin doesn’t work.’

I mean, the experts still haven’t come around to the Vitamin D train, so ‘the experts aren’t impressed by the evidence’ isn’t exactly what I’d think of as a knock-down argument against non-risky Covid treatments.

Also, remember the rules that Scott mostly agrees upon. The scientists aren’t allowed to say anything provably false, but they are allowed to suppress studies and other information they don’t like by making isolated demands for rigor.

Which is exactly what Alexandros claims they are doing. I can confirm this more generally because I spent a bunch of time talking to him as well. Then, in Alexandros’ model, having raised enough FUD (fear, uncertainty and doubt) around the studies in question, and using that to cast doubt on any that they couldn’t do hit jobs on, they go and say ‘no evidence’ which is a standard accepted way to say that which is not, and that’s that. You don’t even have to tell the scientists explicitly to do that because they notice the narrative is that Ivermectin is outgroup-branded and doesn’t work, and that’s that. In all my conversations with Alexandros, I can’t remember him ever claiming any scientist outright lied in the way Scott says they don’t lie. His story in no way requires that.

Which, again, is why Scott had to spend all that time looking himself to know for sure.

Once again, I agree with Scott on the bottom line. As far as I can tell, Ivermectin doesn’t work.

But once again, I don’t think Scott’s stated algorithm is a good one, although once again I happily don’t think Scott is using his stated algorithm in practice. I think he’s mostly using mine, with the main difference being that I think he hasn’t sufficiently adjusted for how much the goalposts have been moved.

The real disagreement between Scott and Alexandros here is exactly that. Alexandros thinks that scientists suppressed Ivermectin using arguments they would have been able to successfully make in exactly the same way whether or not Ivermectin worked. Thus, he claims that those arguments provide no evidence against Ivermectin, whereas there is other evidence that says Ivermectin works. Scott thinks that there are enough hints in the details and rigor of the arguments made that yes, they constitute real and strong evidence that Ivermectin does not work.

More likely, Scott noticed that the people pushing for Ivermectin were part of the Incorrect Anti-Narrative Contrarian Cluster who also push a bunch of other anti-narrative things that are not true, rather than part of the Correct Contrarian Cluster (CCC). There weren’t people who otherwise were playing this whole game correctly but also happened to buy the evidence for Ivermectin. Whereas those who advocated for Ivermectin were reliably also saying vaccines were dangerous or ineffective, and other anti-Narrative claims that were a lot less plausible than Ivermectin, usually along with a bunch of various assorted obvious nonsense.

Which in turn meant that when one did look at the evidence, the cognitive algorithms that caused one to support Ivermectin were ones that also output a lot of obvious nonsense, and were functioning to align with and appeal to an audience holding this uniform set of obvious nonsense beliefs. When something in that cluster is investigated, it turns out to be nonsense or in violation of one of the sacred Shibboleths. So it may be completely unfair and a potentially exploitable strategy, but as a Bayesian, when one sees something in that cluster that doesn’t violate an obvious sacred Shibboleth, it is safe to presume it is nonsense. And if it does violate a Shibboleth, then hey, it’s violating a Shibboleth, so tread carefully.

One can (and whether one realizes it or not, one does to some extent) use it in the climate change example, noticing that full denial of climate change is very much part of the Incorrect Anti-Narrative Contrarian Cluster (ICC), while also noticing that moderate positions are conspicuously not in the ICC but rather in the CCC.

Of course, that’s a level of attention paying and reasoning that’s in many ways harder than doing the core work oneself, but it’s also work that gets done in the background if you’re doing a bunch of other work, so it’s in some sense a free action once you’ve paid the associated costs.

One must of course be very very careful when using such reasoning, and make sure to verify if the questions involved are actually important. If you treat the CCC as true and/or the ICC as false then you are not following the algorithm capable of generating the CCC or rejecting the ICC. I mean, oh yes, this is all very very exploitable, as in it’s being exploited constantly. Often those trying to suppress true information will try to tar that information by saying that it is believed by the ICC. Although they are rather less polite and very much do not call it that.

But although all this did cause Scott to have a skeptical prior, Scott makes it clear that he came into his long analysis post not all that convinced. Hence the giant looking into it himself.

I also notice that Scott didn’t choose any examples where the narrative in question is centrally lying to us, so it’s hard to tell where he thinks the border is, until the final note about the harvest.

Glorious Harvests

Scott’s next argument is that our Official Narrative Pronouncements can be thought of as similar to Soviet pronouncements, like so.

But also: some people are better at this skill than I am. Journalists and people in the upper echelons of politics have honed it so finely that they stop noticing it’s a skill at all. In the Soviet Union, the government would say “We had a good harvest this year!” and everyone would notice they had said good rather than glorious, and correctly interpret the statement to mean that everyone would starve and the living would envy the dead.

Imagine a government that for five years in a row, predicts good harvests. Or, each year, they deny tax increases, but do admit there will be “revenue enhancements”. Savvy people effortlessly understand what they mean, and prepare for bad harvests and high taxes. Clueless people prepare for good harvests and low taxes, lose everything when harvests are bad and taxes are high, and end up distrusting the government.

Then in the sixth year, the government says there will be a glorious harvest, and neither tax increases nor revenue enhancements. Savvy people breathe a sigh of relief and prepare for a good year. Clueless people assume they’re lying a sixth time. But to savvy people, the clueless people seem paranoid. The government has said everything is okay! Why are they still panicking?

The savvy people need to realize that the clueless people aren’t always paranoid, just less experienced than they are at dealing with a hostile environment that lies to them all the time.

And the clueless people need to realize that the savvy people aren’t always gullible, just more optimistic about their ability to extract signal from same.

I mean the clueless people aren’t exactly wrong. The government is still lying to them in year six, in the sense that the harvest is unlikely to be what you or I would call ‘glorious,’ and they will doubtless find some other ways to screw the little guy that aren’t taxes or revenue enhancements.

But if that’s all it is, then the point is essentially correct. There are rules here, or rather there are incentives and habits. The people are responding to those incentives and habits.

That doesn’t mean the ‘savvy’ position is reliable. Being savvy relies on being unusually savvy, and keeping track of how far things have moved. Every so often the goalposts get moved: you think you know what ‘good’ or ‘glorious’ means, but you’re using the old translation matrix, and now you’re wrong. Often that’s because people noticed the translation matrix people were using and wanted to control its output.

Those rules are anti-inductive, in the sense that they depend on the clueless remaining clueless. If the clueless did not exist, then the statements stop serving their purpose, so they’d have to ramp up (or otherwise change) the translation system. At some point, the government cashes in a One Time to say ‘glorious’ instead of ‘good,’ the living still envy the dead, and now if the system keeps surviving ‘glorious’ means ‘the living will envy the dead’ and ‘legendary’ means we will get to put food on the table this year. Then at some point they cash that in too, and so on. In other less centralized contexts, this word creep is continuous rather than all at once.

Then at some point the translation system resets and you start again, with or without the system of power underlying it collapsing. One way for this to happen is if ‘glorious’ already means ‘the living will envy the dead’ and I say ‘lousy’ then that can’t be intended to be translated normally, so I might actually honestly mean lousy without thinking the living will envy the dead, and so the baseline can reset.

But if you play this game, you by construction have to lose a large percentage of the people who will be confused what you’re doing. It’s designed to do that. One can’t then look at the clueless and tell them to get a clue, because there’s a fixed supply of clues.

If the system is distributed rather than centrally determined, and it’s a bunch of people on social media running around labeling things as other things, then you see a gradual ramping up of everything over time as people adjust expectations and get wise to the game, or as the Narrative’s forces win battles to expand their powers and then launch new attacks on the opposition. If I want to say something is glorious I have to be two steps ahead of whatever I view as the ‘standard’ description. Other similar dynamics exist in other places where the meanings of words can be changed or expanded over time, because those words serve purposes.
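The word-creep dynamic above is simple enough to sketch as a toy model. Everything here is hypothetical and illustrative: the scale of words, the `translate` function and the inflation counts are inventions for the example, not anything from Scott’s post or from real usage data.

```python
# Toy model of a "translation matrix" under word creep: official language
# maps to actual conditions, and the mapping drifts as savvy listeners
# catch on, so the official words must keep inflating.

OFFICIAL_SCALE = ["catastrophic", "poor", "adequate", "good", "glorious", "legendary"]

def translate(official_word: str, inflation: int) -> str:
    """Recover the likely actual meaning by shifting `inflation` steps
    back down the scale (never past the bottom)."""
    i = OFFICIAL_SCALE.index(official_word)
    return OFFICIAL_SCALE[max(0, i - inflation)]

# Early on, "good" still roughly means good.
assert translate("good", 0) == "good"
# After years of spin, the same word decodes to something much worse...
assert translate("good", 2) == "poor"
# ...and once the One Time is cashed in, even "glorious" decodes poorly.
assert translate("glorious", 3) == "poor"
```

The anti-inductive part is that `inflation` is not a published constant: it is exactly the quantity everyone is fighting over, and keeping your estimate of it current is the ongoing maintenance cost described above.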

Bounds, Rules, Norms, Costs and Habits

Scott views bounded distrust as a game with rules and lines. There are some lines you mostly obey but sometimes cross at a price, and some lines you don’t cross.

I’d modify that to say that there mostly aren’t lines you simply do not cross. There are only lines that are expensive to be caught crossing when similar others are not also caught crossing them.

This is a variant of having correlated debts, or losing money in the same way those around you lose money. You mostly only get punished for getting singled out as unusually bad. Thus, the more you are pushing the same lies as others and breaking the same rules, especially as part of The Narrative, the more effectively you are protected, and thus the lower the price of breaking the rules.

When deciding what to do, various players will rely on some combination of bounds, rules, norms, costs and habits. Mostly, they’ll do whatever they are in the habit of doing, and those habits will adjust over time based on what is done, rather than thinking carefully about costs and benefits. This can also be thought of similarly as them following and over time changing the norms that are being locally and globally followed. They’ll look at the costs and benefits of following or breaking what they think of as ‘the rules’ in various ways, mostly intuitively, and decide what to do about that in context.

Centrally, most official, news and ‘expert’ sources are looking to balance the opportunity to show their loyalty to and support for the Narrative they’re getting behind, and the rewards for doing that, against the penalties that might be extracted if they are caught getting too far out of line, and thus hammered down upon.

It is out of line to go too far and get caught, to be too far removed from the underlying physical reality in ways that can be observed or proven, and thus that weaken the Narrative and your reputation. You lose points for losing points, more than you lose points for anything else.

It is also out of line to not go far enough, and to adhere too well to what used to be ‘the rules’ rather than scoring sufficient Narrative points. One must stay on brand. This, too, is sticking one’s neck out in a dangerous way.

The combination of these factors does often mean that there is effectively a calibrated response to any given situation. The details of what is said will be an intuitively but skillfully chosen balance of exactly what claims are made with exactly what level of specificity and rigor. Thus the chosen details of what is claimed and said actually can tell you quite a lot about the underlying physical world situation, if you can remain sufficiently well-calibrated in this and maintain the right translation matrix.

If you can do that, you can observe exactly how much smackdown occurs and in exactly what way, and know whether they’re smacking down something true, something unclear or something false. The problem is that there’s lots of inputs to that matrix, so without a lot of context you’ll often get it wrong. And also the rules keep changing, so you need to keep your matrix up to date continuously.

Combining a variety of sources improves your results. Different sources, even with similar overall trustworthiness, will have different costs, both external and internal/intrinsic, and be pushing somewhat different Narratives. By observing the differences in their responses, you can learn a lot about what’s going on by asking what would make all their responses make sense at once. Exactly who falls in line and in which ways, with what levels of weaseling, is no accident.

The principle that This is Not a Coincidence Because Nothing is Ever a Coincidence will serve you well here on the margin.

What Is the Current Translation Matrix?

I’m not going to justify this here, but it seems only fair to tell you where I am at. A full explanation would be beyond the scope of this (already very long) post, hence the incompleteness warning up front.

Here’s mine for politicians:

They are on what I call simulacra level 4, and they are moving symbols around without a direct connection to the underlying reality. Mostly, presume that politicians are incapable of means-ends reasoning or thinking strategically or engaging seriously with the physical world, and what comes out of their mouths is based on a vibe of what would be the thing one would say in a given situation, and nothing more.

Assume by default that they lie, all the time, about everything, including intentionally misstating basic verifiable facts, but that to model them as even thinking on those terms is mostly an error. Also assume that when they do say that which is not, if it is within the ability and the interests of the opposition to call them out on it then they will do so, and that the politician has intuitions that consider this and its consequences somewhat when deciding how brazenly to lie. While noting that in some situations, being called out on a lie is good for you, because it draws attention to the proper things and shifts focus the way you want.

Information about what type of vibe a politician is looking to give off is useful in terms of figuring out what vibe they are looking to give off, which can change when circumstances change. Explicit promises carry non-zero weight to the extent that someone would be mad at them for breaking those promises and that this would have felt consequences that can impact their intuitions, or other ways in which it directly constrains their behaviors.

Also assume that they will act as if they care about blame on about a two week time horizon, so the consequences of things being proven false mostly have to back-chain in time to punish them within two weeks, or no one will care.

And that’s it.

For traditional news sources like the Washington Post, CNN or FOX:

Assume until proven otherwise that they are engaging primarily in simulacra level 3 behavior, pushing the relevant Narrative and playing to and showing their loyalty to their side of the dialectic to the extent possible. Thus, subject to the constraints they are under, assume they are giving the optimal available-to-them arguments-as-soldiers (also rhetoric-as-soldiers) version of whatever thing they are offering, and calibrate based on that.

Those constraints are a very narrow form of technically correct, the best kind of correct. Or rather, a very narrow form of not technically incorrect, with something that could be plausibly held up as some sort of justification, although that justification in turn need not be verified or accurate. So you can often have a circular information cascade with no actual evidence.

Basically, if a statement or other claim:

  1. Is a specific falsifiable claim about the physical world.
  2. Could, if false, in actual practice, be falsified in a way that would ‘count.’

Then it has to technically be laid out in a not false way, for example by saying that ‘source Y (or an unnamed source) said that X’ instead of X. The Marx/Lincoln story is an excellent example of exactly where this line is. Assume that like that story, everything will go exactly up to that line to the extent it is useful for them to do so, but not over it. Then, based on what content is included, you know they didn’t have any better options, and you can back-chain to understand the situation.
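The two-condition test above, plus the ‘source Y said X’ escape hatch, amounts to a small decision procedure. As a hedged sketch (the class and field names are my own framing for this example, not anyone’s official taxonomy):

```python
# Sketch of the reading rule for traditional news sources: a claim
# constrains physical reality only if it is specific and falsifiable,
# would actually be punished in practice if false, and is asserted
# directly rather than laundered through "source Y said X".

from dataclasses import dataclass

@dataclass
class Claim:
    specific_and_falsifiable: bool  # condition 1 above
    punishable_if_false: bool       # condition 2: falsification would 'count'
    attributed_to_source: bool      # "source Y (or an unnamed source) said X"

def constrains_reality(claim: Claim) -> bool:
    """Can you treat this claim as evidence about the physical world?"""
    if claim.attributed_to_source:
        # You only learn that the source said it, not that it is true.
        return False
    return claim.specific_and_falsifiable and claim.punishable_if_false
```

On this model a vague editorial assertion fails condition 1, a specific claim nobody would bother checking fails condition 2, and an attributed quote tells you only about the quoting, which is why back-chaining from what they chose to include does the real work.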

Like politicians, they mostly also care about blame on a two-week time horizon, so there needs to be a way for the anticipated consequences of crossing lines and breaking rules to back-chain and be visible within two weeks, or they’ll mostly get ignored.

Assume that they are constantly saying things similar to ‘wet ground causes rain’ when they want to be against wet ground, and also framing everything with maximum prejudice. Everything given or available to them will be twisted to inflict maximum Narrative impact (and otherwise get maximum clicks) wherever possible, so analyze output on that basis. Assume that they will have outright lied to their sources about what the story was about, or what information would be included, or anything else, if they found this to be useful and worth more than not burning their source. Also remember that if you are about to be a source.

Basically, yes, there is a teeny tiny sense in which they will not outright lie, in the sense that there is a Fact Checker of some kind who has to be satisfied before they can hit publish, but assume it is the smallest sense possible while still containing at least some constraint on their behavior.

Remember that any given ‘source’ can, for example, be a politician.

Remember that if the source is an ‘expert’ that means exactly nothing.

Also assume that headlines have (almost) zero constraints on them, are written by someone who really, really doesn’t care about accuracy, and are free to not only be false but to directly contradict the story that follows, and that they often will do exactly that.

If information is absent, that only means that such information would have been unhelpful and they don’t think it would be too embarrassing to simply ignore it, for which the bar is very high. They are under zero obligation to say anything they don’t feel like saying, no matter how relevant.

If there’s an editorial, there are no rules.

If it’s in any way subjective, there are no rules.

Words mean whatever the Narrative decided they mean this week.

And that’s it.

(I will note that in my experience, Bloomberg in particular does not do this, and can be trusted substantially more. There likely are also others like that, but this should be your default.)

For ‘scientists’ and ‘experts’:

If you want to find a ‘scientist’ or ‘expert’ to say any given thing, you can.

If you have some claim that fits the Narrative, then unless it is a full strict-false-and-one-could-prove-it violation, you can get lots of experts/scientists to sign off on it. So all you’re learning is that this is part of the Narrative and isn’t definitely false.

You can look at the details of the dissent and the details of what is in the petition or official Narrative statement, and exactly who conspicuously did/said or didn’t say/do what and exactly what weaseling is there, and extract useful information from that, because they’re maximizing for Narrative value without going over the strict-false line.

Mostly any given expert will have slightly more constraints on than that, and will follow something similar to the news code, and will also have some amount of internal pressure that causes the vigor of endorsement to be somewhat proportional to the accuracy of the statement, but it’s also proportional to the magnitude of the Narrative pressure being applied, so one must be cautious.

The more technical the talking gets, the more you can trust it (to the extent you can understand it). There’s still some amount of dignity constraining behaviors in these ways in some places, but in other places it is mostly or entirely gone.

Also understand that the systems and rules are set up at this point to allow for very strong suppression of dissent, and creation of the illusion of consensus, through the use of social pressures and isolated demands for rigor and other such tactics, without need to resort to sharp falsifiable statements. Often the tactics and justifications involved in such moves are obvious nonsense when viewed by ordinary humans, but that is well within bounds, and failing to use such tactics is often not within bounds.

Expert consensus that is falsifiable-in-practice-in-a-punishing-way can still largely be trusted.

Expert consensus that is not that, not so much. Not as such. Not anymore. But you can sometimes notice that the consensus is unexpectedly robust versus what you’d expect if it wasn’t trustworthy. You can also use your own models to verify that what the experts are saying is reasonable, combined with other secondary sources doing the same thing, and combined with individual experts you have reason to trust.

You should definitely expect the experts in any given field to greatly exaggerate the importance of the field at every turn, and to warn of the dire consequences of its neglect and our failure to Do Something, and for there to be real consensus on that for obvious reasons, except with less shame or restraint than in the past.

And, again, that’s it.

There are other sources, specific sources, where the translation matrix is less extreme, and I of course do my best to draw as much as possible from such sources. There’s still almost always a long way to go before getting to the level of trust that would be ideal, but there are many levels.

So What Do We Do Now?

We decide how much time and effort we want to spend maintaining our calibration and translation matrix, and for which sources.

Maintaining a high-quality translation matrix of your own is a lot of work. That work isn’t obviously worth it for you to do. There are three basic approaches here.

One is to basically stop caring so much about the news. This is a good strategy for many, and in most times. Before Covid, especially before Trump and when not doing any trading that relied on knowing what was going on, I was mostly implementing it. One can live the good life without caring about such matters. In fact, not caring often makes it easier. Thus, you don’t know what you can trust. But as long as you also don’t care, it’s fine. 

You know what’s going on hyper-locally, with your friends and family and work and neighborhood, and that’s it. For most of history, that was enough. 

This isn’t as easy as staying away from newspapers and other official news sources. You also have to deal with the constant stream of news-bringing on social media, and in real life from coworkers, friends and family, and so on. You might want to be done with the news, but the news isn’t voluntarily done with you. 

You’ll need to train yourself that when you see a post about today’s terrible news, you ask yourself only one question. Will this directly impact the local physical world in ways that alter my life, thus forcing me to care? Or not? If not, move on. If it’s political advocacy, or someone being wrong on the internet, definitely move on. Offline, you’ll need to follow similar procedures, which will require smiling and nodding.

You’ll also need to prune your incoming sources of non-news, filtering out those who bring you too much news that isn’t directly relevant to your life, and especially those who bring you political advocacy. This leads to some tough choices, as there are sources that offer a combination of worthwhile things and exactly what you want to avoid. They’re mostly going to have to go.

A second option is to keep very careful track of the physical world conditions, do lots of your own work and not need to rely on secondary sources like newspapers. I assure you that mostly this is a lot of work and you only want to do this in carefully selected sub-realms. It’s taking the local approach and extending it to some non-local things, but it’s difficult and it’s time intensive, and mostly only makes sense if your conclusions are in turn going to be relied on by others. Also, it often needs to complement keeping up your translation matrix rather than substituting for it, as I can attest from experience.

The other option is division of labor and outsourcing.

If you can find a sufficiently trustworthy secondary source that analyzes the information for you, then you don’t need to worry about the trust level of their sources. That’s their problem.

Or to put it another way, you don’t have to have a fully general translation matrix. You only need to have a translation matrix for sources you want to get information from. You get to choose your portfolio of sources.

That can be as simple as your spouse or a good friend that you know you can trust. There is of course a risk of telephone problems if there are too many ‘links in the chain’ but such costs are often acceptable. Using a personal source has the extra advantage that they can filter for you because they have a good idea what is relevant to your interests.

You can also aggregate various community sources. There’s the obvious danger of information cascades here as there is elsewhere, as the upstream sources are still what they are, but it does provide some amount of protection.

You can also choose anything from one or more bloggers to a set of Twitter accounts to a newspaper, radio show or TV program you find to be unusually trustworthy. Or combine any or all of these and other sources.

I sometimes hear that someone has decided to outsource their Covid perspectives to me and my posts in this way. The posts are designed to allow you to think for yourself and reach your own conclusions, but also to save you the work of needing to maintain a detailed translation matrix while doing so, especially since I hope that the correct matrix for DWATV itself is very close to the identity matrix, except for the need to ‘translate into one’s own language’ since my way of framing and thinking about things has quirks and likely doesn’t exactly match yours. But that’s ideally about understanding rather than trust.

I have compiled a lot of sources over the years that I trust to be rather high up on a ‘pyramid of trust,’ meme version not currently ready for publication. This includes most (but not quite all) of my friends, since I value such trustworthiness and careful speaking highly, but even within that set there’s clear distinctions of how careful one needs to be with each source in various ways.

Everyone I list on my links and blogroll qualifies as someone I am mostly willing to trust. If they didn’t count as that, I wouldn’t list them.

That doesn’t mean I fully trust their judgment, or that it’s all created equal, but there’s a sense in which I can relax when engaging with such sources. There’s also, of course, a sense in which I can’t relax even when dealing with most of those sources, to varying degrees. I wish that were not so, but better to accept it than to pretend it’s not true.

The best sources, at least for my purposes, do an excellent job of being transparent about how trustworthy they are being in any given situation. Scott Alexander, as a prime example, is very good at this.

That’s the landscape on a personal and practical level.

Mostly I recommend, for keeping general tabs on the world, collecting a list of sources you’ve decided you can trust in certain ways, and then mostly trusting them in those ways while keeping an eye out in case things have changed. Then supplement that with your own investigations when something matters to you.

For keeping tabs on your own local world, there are no shortcuts. You’ll have to do the work yourself.

But what about the global problem as a global problem? Sure, politicians have mostly always lied their pants on fire, but what to collectively do about this epic burning of the more general epistemic commons?

There are no easy answers there.

My blog is in part an attempt at an answer. This seems very much like a Be The Change You Want to See in the World situation. Thus, one can begin by striving to:

  1. Be a trustworthy source of information to the extent you can manage. This includes not silently dropping information whose implications you dislike.
  2. That means being clear on how and why you believe what you believe, and how confident you are in it.
  3. Explicit probabilities are great when appropriate.
  4. As is holding yourself accountable when you’re wrong.
  5. Don’t reward untrustworthy sources, including with undue attention. When appropriate, make it clear in what ways they cannot be trusted, but mostly don’t give them the oxygen of attention they thrive on.
  6. Do reward trustworthy sources, including with attention, and spread the word.
  7. Focus on physical reality and de-emphasize all versions of the Narrative. Look to figure out the gears underlying all this, and create common knowledge.
  8. Make it clear you are doing all this, to give others reason to follow suit.

This doesn’t have to be about having a blog, or even a social media account, or the internet, or any kind of information projection at all. It’s about how people at all levels interact, in the world, with people. 

Note for Commenters: The no-politics rules are importantly not suspended here. Some amount of interaction with politics will be necessary. But beyond a clear emphasis on physical-world simulacra-level-1 considerations, advocacy of positions and partisan bickering remain right out. I stand by, willing to use the delete button and potentially the ban hammer if necessary, while remaining hopeful they will not be necessary.

19 comments

Thank you very much for the excellent analysis here; I’m not sure if this was an intended result, but after reading I’ve updated towards being more sympathetic towards people who believe what I perceive to be “baseless” conspiracy theories. All that needs to happen is for your trust to be even slightly misplaced (which is easy to do when emotions, family, and politics are involved), and BAM, suddenly that “insane worldview” sounds totally plausible… I didn’t really understand before reading this just how terrifyingly easy that slope can be.

I suppose one of my recent complaints about the rationalist community, on LessWrong and elsewhere, is that I feel like I am having to adjust my translation matrix away from the identity. There are certain subjects that I keep seeing people be wrong about, in the sense that they make more mistakes in a given direction, and then when they are called out on those mistakes, they acknowledge them in a begrudging way, with the tone of "I was wrong, but it's unfair to call me out on being wrong about this issue".

I'm purposefully avoiding mentioning any specific examples here, but I will note that in this essay, there were at least two times when I felt like what you were saying was outright false or implied something that was false, in the same manner as the statements in the Lincoln and Marx article. Usually I'm willing to overlook these moments in your writing, using the translation matrix I have built up, but it felt like carelessness to do that in an article condemning people for doing that.

I understand that by not giving examples, I don't open the door to much useful discussion. I'm avoiding them because I don't expect the comment section here to be able to discuss this issue without devolving into object-level discussion, and because I don't expect to have the conversational bandwidth to respond to comments very much.

You are welcome to PM me which two things you think they were. I think 2 such statements in 35 pages written at this speed is not that bad a rate, but I'd prefer it be zero.

And in general, I don't see a way around talking about it if we want to fix it, and that seems like a good use of a comments section?

If you could PM me as well with details I’d greatly appreciate it; I promise not to debate you on object-level issues here (even if I think you’re horrifically incorrect). I rely heavily on this community for information I don’t have time to research myself, so being aware of such meta-issues is really important to me.

those mistakes, they acknowledge them in a begrudging way,

That's pretty good, as it goes. 10% of people will grudgingly admit they are wrong, and 1% will do it gladly. The rest... never.

The masks story fits the template of bounded distrust. You sum it up here as:

Let me tell you a story, in three acts.

  1. All masks don’t work unless you’re a health professional.
  2. All masks work.
  3. Cloth masks don’t work.

At each stage of this story, scientists got on television to tout the current line. At each stage of this story, the ‘science denier’ style labels got used and contrary views were considered ‘dangerous misinformation.’

Those were the gestalt of the era, not the words that the top experts were saying. Focusing in just on the first act:

Look at the words that Fauci or the CDC were saying in March 2020, and it wasn't "wearing a mask won't reduce your chances of getting covid at all."

The quote from the CDC which got discussed on LW was "CDC does not recommend that people who are well wear a facemask to protect themselves from respiratory diseases, including COVID-19." This isn't directly saying anything about the world (except about what the CDC is recommending); the CDC generally speaks in imperatives rather than declaratives.

Fauci said sentences like "When you’re in the middle of an outbreak, wearing a mask might make people feel a little bit better and it might even block a droplet, but it’s not providing the perfect protection that people think that it is" (from this 90 sec video - IMO watching the video gives a better sense of what Fauci is up to). The content of that sentence is that the risk reduction from masks is probably > 0% but < 100%, but it's said in a way that can easily give an impression that rounds off to "don't work." There are some interesting rhetorical moves here, like comparing to the standard of "perfect protection," and switching to very concrete language ("block a droplet") and tentative phrasing ("might even") when discussing the benefits, rather than using any word like "effective" or "helps" or "prevents" that fits neatly into the concept templates people are ready to act on.

It's the sort of situation where, if you're half paying attention to the chatter and inclined to believe what you hear, you'll come away thinking that masks don't work. But my recollection is that, once I had reason to think that mask-wearing would reduce my chances of getting covid (late Feb?), I didn't come across any instances where a person who would know clearly said 'no, you're wrong about that, masks don't work at all.'

I think the US Surgeon General’s tweet pretty straightforwardly has that as an interpretation, that wearing a mask as part of the general public does not reduce your Covid risk.

I agree with most of the criticisms of current media in this article, but do not think I agree with the claim that things were better in 2015/2010/pre-Trump/in the Before Time. I'm not entirely sure how to go about testing this claim, but my strong impression is that e.g. media treatment of science has been roughly this quality for approximately forever. I don't think e.g. the recent media narrative around vaccines is meaningfully worse than the media narrative around nuclear power has been for a long time.

I think you are right, but Zvi included this in his "wet ground causes rain" category: the errors you notice in any article about something you really are an expert on (either because it's your professional or scientific field, or, to me often more enlightening and more brutal, because you or one of your close ones are part of the story). This has indeed always been the case.

What is new is that you start to feel the narrative in the sources themselves. It's percolating down to the experts (media-facing ones first, which makes sense because the media are their peers more than other experts are, and your red lines are often about losing your pants in front of your peers) and to science journals: op-eds, summaries, then abstracts and conclusions, and finally (though I think we are not quite there yet) the cores of the articles themselves.

I think things are really changing: direct sources are more and more available to the public... but simultaneously get less and less trustworthy. Which means being close to the sources is no longer a less strenuous alternative to building your own model of reality and doing a lot of cross-checking. Maybe it also means reality becomes less relevant, and narrative more relevant, to decision-making and personal success in more and more cases (more often and for more people), which I find super frightening...

direct sources are more and more available to the public... but simultaneously get less and less trustworthy.

The former helps cause the latter. Sources that aren't available to the public, or are not widely read by the public for whatever reason, don't face the pressure to propagandize, whether to influence the public or to be seen by it as ideologically correct.

Of course, influencing the public is only one of several drives to distort or ignore the truth, and less public fora are not automatically trustworthy.

I continue to frequently refer back to my functional understanding of bounded distrust. I now try to link to 'How To Bounded Distrust' instead because it's more compact, but this is, I think, the better full treatment for those who have the time. I'm sad this isn't seeing more support, presumably because it isn't centrally LW-focused enough? But to me this is a core rationalist skill not discussed enough, among its other features.

A lot of the jumbled-up, unclear thoughts that have been tumbling around in my head for the past two years feel clarified by reading this. Excellent post.

A better analysis than Scott's may be that the media lies on a fractal level. No matter which level you look at, a certain number of things aren't true. If you look at statements where the media is vague, you'll get lies, but if you look at more direct statements, you'll also get lies. It's a mistake to think "well, when I look at vague areas, I can determine what the lies are, so I can determine the lies by the media in general".

It's not hard to come up with recent examples where we were told something blatantly false by the media, like "Kyle Rittenhouse shot two black people" or "a January 6 rioter killed a police officer with a fire extinguisher".  But Scott has left himself an out where he can always claim that whoever told us that was honestly mistaken or got bad information (the phrase "reckless disregard for the truth" suggests itself), or that they aren't the right sort of authorities to count, or that they just hinted at the false statement (media do that all the time even for true things).

Where does experimental data fall in your translation matrix? Some data is well within what you describe as hyperlocal (e.g. I hear raindrops outside; it must be raining), but other data has a much longer journey. For example, if you see an image of Mars from JPL labeled "Taken by Sojourner on date... time... place... with such and such camera facing a heading of ... degrees North, and with colors calibrated against the color chips with spectral frequencies x, y, and z mounted below the camera", how would that picture do after going through your translation matrix?

I'm specifically picking on Mars because (if I recall correctly) people did think it was reddish after looking at imagery for some non-negligible amount of time, until realizing that they had messed up the photo processing.

And this is not easy for anyone on Earth to verify either. There are literally no humans on Mars who can say "Stop. You are wrong. The dirt is dirt colored, not red." It would be entirely fair to say that a mistake like that cannot be corrected at all without spending billions of dollars on another Mars mission.

I hope this example is close enough to the physical world; I am not familiar with what you mean by simulacra levels. I hope you can see how these concerns have analogs that are much closer to home, without explicitly name-dropping any terrestrial historical events.

I interpret the Mars data literally. I assume they are relaying the pictures as they took them. Correcting for mechanical errors of this sort is needed, but it's a different kind of epistemic problem.

Essentially I do not expect enemy action here.

In other experiments it can be less clear, but that seems easy enough.

None of our institutions can be trusted to always tell the truth

That could be one of two significantly different claims:

  1. Our institutions know The Truth due to their excellent epistemology, but don't tell it because they are liars.

  2. Our institutions don't know The Truth, because epistemology is difficult. They are just approximating it, and they say different things at different times because they are shifting their imperfect consensus according to information that comes in. AKA being less wrong.

3. Our institutions don't know The Truth, because epistemology is difficult, and they are just approximating it. They are not telling us either one, because they are liars.

:-P