Bayes's Rule quantifies the notion of 'extraordinary claim' and 'extraordinary evidence'. An extraordinary claim is one with a low prior probability in advance of considering the evidence. Extraordinary evidence is evidence with an extreme likelihood ratio favoring the claim.
The likelihood ratio is defined as the probability of observing the evidence if the claim is true, divided by the probability of observing that same evidence if the claim is false:

P(evidence | claim) / P(evidence | ¬claim)
To obtain an extreme likelihood ratio, we need the bottom of the fraction to be very low. The top of the fraction being very high doesn't help much. If the top of the fraction is 99% and the bottom is 70%, that's still not a very extreme ratio, and it doesn't help much if the top is 99.9999% instead.
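The point about numerators and denominators can be checked directly. The sketch below uses the illustrative figures from the text (99%, 70%, and a small denominator chosen for contrast):

```python
# Likelihood ratio = P(evidence | claim) / P(evidence | no claim).
def likelihood_ratio(p_given_claim, p_given_business_as_usual):
    return p_given_claim / p_given_business_as_usual

# A high numerator alone doesn't make evidence extreme:
moderate = likelihood_ratio(0.99, 0.70)        # ~1.41
near_certain = likelihood_ratio(0.999999, 0.70)  # still only ~1.43

# A tiny denominator is what produces an extreme ratio:
extreme = likelihood_ratio(0.99, 0.0001)       # ~9900
print(moderate, near_certain, extreme)
```

Raising the numerator from 99% to 99.9999% barely moves the ratio; shrinking the denominator moves it by orders of magnitude.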
So to get extremely strong evidence, we need to see an observation which is very improbable, given "business as usual". This observation would be deserving of the title, "extraordinary evidence".
A corollary to "extraordinary claims require extraordinary evidence" was stated by Gwern Branwen as "ordinary claims require merely ordinary evidence". This corollary may be much more important in ordinary life, where we encounter more ordinary claims than extraordinary claims, but may be more tempted to reject them by demanding extraordinary evidence (motivated skepticism).
Consider the following hypothesis: What if there are Bookcase Aliens who teleport into our houses at night and drop off bookcases?
Bob offers the following evidence for this claim: "Last week, I visited my friend's house, and there was a new bookcase there. If there were no bookcase aliens, I wouldn't have expected that my friend would get a new bookcase. But if there are Bookcase Aliens, then the probability of my finding a new bookcase there was much higher. Therefore, my observation, 'There is a new bookcase in my friend's house,' is strong evidence supporting the existence of Bookcase Aliens."
In an intuitive sense, we have a notion that Bob's evidence "There is a new bookcase in my friend's house" is not as extraordinary as the claim "There are bookcase aliens" - that the evidence fails to lift the claim. Bayes's Rule makes this statement precise.
Bob is in fact correct that his observation, "There's a new bookcase in my friend's house", is indeed evidence favoring the Bookcase Aliens. Depending on how long it's been since Bob last visited that house, there might ceteris paribus be, say, a 1% chance that there would be a new bookcase there. On the other hand, the Bookcase Aliens hypothesis might assign, say, 50% probability that the Bookcase Aliens would target this particular house among others. If so, that's a likelihood ratio of 50:1 favoring the Bookcase Aliens hypothesis.
However, a reasonable prior on Bookcase Aliens would assign this a very low prior probability given our other, previous observations of the world. Let's be conservative and assign odds of just 1 : 1,000,000,000 against Bookcase Aliens. Then to raise our posterior belief in Bookcase Aliens to somewhere in the "pragmatically noticeable" range of 1 : 100, we'd need to see evidence with a cumulative likelihood ratio of 10,000,000 : 1 favoring the Bookcase Aliens. 50 : 1 won't cut it.
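The arithmetic above is just the odds form of Bayes's Rule: posterior odds = prior odds × likelihood ratio. A minimal sketch, using the numbers from the text:

```python
# Prior odds of 1 : 1,000,000,000 against Bookcase Aliens,
# and a 50 : 1 likelihood ratio from the bookcase observation.
prior_odds = 1 / 1_000_000_000   # odds for : against, as a single number
lr = 50 / 1

posterior_odds = prior_odds * lr
print(posterior_odds)            # ~5e-8, i.e. about 1 : 20,000,000

# Likelihood ratio needed to reach posterior odds of 1 : 100:
needed = (1 / 100) / prior_odds
print(needed)                    # ~10,000,000
```

Even after a 50 : 1 update, the hypothesis remains at roughly 1 : 20,000,000 against, far short of the 1 : 100 target.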
What would need to change for the observation "There's a new bookcase in my friend's house" to be convincing evidence of Bookcase Aliens, compared to the alternative hypothesis of "business as usual"?
As suggested by the Bayesian interpretation of strength of evidence, what we need to see is an observation which is nigh-impossible if there are not bookcase aliens. We would have to believe that, conditional on "business as usual" being true, the likelihood of seeing a bookcase was on the order of 0.00000001%. That would then take the likelihood ratio, aka strength of evidence, into the rough vicinity of a billion to one favoring Bookcase Aliens over "business as usual".
We would still need to consider whether there might be other alternative hypotheses besides Bookcase Aliens and "business as usual", such as a human-operated Bookcase Conspiracy. But at least we wouldn't be dealing with an observation that was so unsurprising (conditional on business as usual) as to be unable to support any kind of extraordinary claim.
However, if instead we suppose that Bookcase Aliens are allegedly 99.999999% probable to add a bookcase to Bob's friend's house, very little changes - the likelihood ratio becomes 99.999999% : 1%, or roughly 100 : 1 instead of 50 : 1. To obtain an extreme likelihood ratio, we mainly need a tiny denominator rather than a big numerator. In other words, "extraordinary evidence".
One contributed example: "A few years back, a senior person at my workplace told me that a new employee wasn't getting his work done on time, and that she'd had to micromanage him to get any work out of him at all. This was an unpleasant fact for a number of reasons; I'd liked the guy, and I'd advocated for hiring him to our Board of Directors just a few weeks earlier (which is why the senior manager was talking to me). I could have demanded more evidence, I could have demanded that we give him more time to work out, I could have demanded a videotape and signed affidavits… but a new employee not working out, just isn't that improbable. Could I have named the exact prior odds of an employee not working out, could I have said how much more likely I was to hear that exact report of a long-term-bad employee than a long-term-good employee? No, but 'somebody hires the wrong person' happens all the time, and I'd seen it happen before. It wasn't an extraordinary claim, and I wasn't licensed to ask for extraordinary evidence. To put numbers on it, I thought the proportion of bad to good employees was on the order of 1 : 4 at the time, and the likelihood ratio for the manager's report seemed more like 10 : 1."
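The numbers at the end of that example can be run through the same odds arithmetic. This is a sketch using only the figures the contributor states (1 : 4 prior, 10 : 1 report):

```python
# Prior odds of a bad hire vs. a good one, per the example: 1 : 4.
prior_bad_to_good = 1 / 4
# The manager's report is about 10 : 1 more likely given a bad hire.
report_lr = 10 / 1

posterior_bad_to_good = prior_bad_to_good * report_lr
print(posterior_bad_to_good)   # 2.5, i.e. odds of 2.5 : 1 that the hire is bad

# Converting odds to a probability: odds / (1 + odds)
p_bad = posterior_bad_to_good / (1 + posterior_bad_to_good)
print(p_bad)                   # ~0.71
```

Ordinary evidence (10 : 1) is quite enough to move an ordinary claim (1 : 4 prior) to a clear majority of the probability mass.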
Or to put it another way: The rule is 'extraordinary claims require extraordinary evidence', not 'inconvenient but ordinary claims require extraordinary evidence'.
In everyday life, we consider many more ordinary claims than extraordinary claims - ordinary claims whose inconvenience might tempt us to dismiss their adequate but ordinary evidence. Gwern's admonition "Ordinary claims require merely ordinary evidence" is probably much more important in everyday life.
A key feature of the Bookcase Aliens example is that the Bayesian acknowledges the observation of a new bookcase as being, locally, a single piece of evidence with a 50 : 1 likelihood ratio favoring Bookcase Aliens. The Bayesian doesn't toss the observation out the window because it's insufficient evidence; it just gets accumulated into the pool. If you visit house after house, and see new bookcase after new bookcase, the Bayesian slowly, incrementally begins to wonder if something strange is going on, rather than dismissing each observation as 'insufficient evidence' and then forgetting it.
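Because independent likelihood ratios multiply (equivalently, log-odds add), repeated bookcase sightings accumulate. The sketch below reuses the text's 1 : 1,000,000,000 prior and 50 : 1 per-observation ratio; the number of houses is illustrative, not from the text:

```python
# Each independent new-bookcase observation multiplies the odds by 50 : 1.
prior_odds = 1e-9      # 1 : 1,000,000,000 against Bookcase Aliens
lr_per_house = 50.0

odds = prior_odds
for house in range(1, 7):
    odds *= lr_per_house
    print(f"after house {house}: odds = {odds:.3g}")

# After 6 such observations: 1e-9 * 50**6, roughly 15.6 : 1 in favor.
```

No single observation "proves" anything, but the pool of evidence crosses even a billion-to-one prior after a handful of independent 50 : 1 updates.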
This stands in contrast to the instinctive "Arguments as Soldiers" idiom where, having chosen not to believe in Bookcase Aliens on the basis of the evidence offered, you would treat that evidence as an 'enemy soldier' and say that it was no evidence at all, thus defeating it.
The Bayesian notion of "yes, that's favorable evidence, but it's not quantitatively enough evidence" also stands in contrast to the practice of taking any concession as a definitive victory. If true claims are supposed to have all their argument-soldiers upheld and false claims are supposed to have all their enemy argument-soldiers defeated, then a single undefeated claim of support, in any quantity, stands as a proof. A Bayesian considers the bookcase observation to be locally a piece of evidence favoring Bookcase Aliens, just massively insufficient evidence. (In case of culture clashes, you might want to consider not conceding this aloud, but you should always be thinking it.)
An obvious next question is what makes a claim 'extraordinary' or 'ordinary'. This is a deep separate topic, but as an example, consider the claim that the Earth is becoming warmer due to carbon dioxide being added to its atmosphere.
To evaluate the ordinariness or extraordinariness of this claim, what matters is not how inconvenient or consequential it would be, but how well the claim itself fits our prior model of how the world works: whether it follows from physics we already accept, and whether it requires postulating new entities or new laws.

By that test, "Large amounts of added carbon dioxide will incrementally warm Earth's atmosphere" would have been an 'ordinary' claim in advance of trying to find any evidence for or against it - it's just how you would expect a greenhouse gas to work, more or less. Thus, one is not entitled to demand digitally signed video recordings of carbon dioxide levels in 650 million BCE before believing it.
If you think that a proposition has a prior odds of 1 to a quadrillion, and then somebody presents evidence with a likelihood ratio of a billion to one favoring the proposition, you shouldn't say, "Oh, I guess the posterior odds are 1 to a million", but question whether either (a) you were wrong about the prior odds or (b) the evidence isn't as strong as you assessed. Seeing evidence with a strength of 90 decibels / 9 orders of magnitude / 30 bits supporting a false hypothesis, should, on average, happen about once every IT DIDN'T HAPPEN. See Pascal's Muggle.
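The three units quoted for evidence strength are just the same likelihood ratio on different logarithmic scales. A quick check, assuming the standard conversions (decibels = 10·log₁₀, bits = log₂):

```python
import math

lr = 1e9  # a billion-to-one likelihood ratio

decibels = 10 * math.log10(lr)
orders_of_magnitude = math.log10(lr)
bits = math.log2(lr)

print(decibels)             # ~90
print(orders_of_magnitude)  # ~9
print(bits)                 # ~29.9, i.e. about 30 bits
```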