Epistemic status: exploratory

Introduction: Starting point

One of the aspects of deconfusion that consistently trip people up is “where to start”. You have your phenomenon or cluster of intuitions, and then you… make it less confusing? There are a bunch of heuristics (find as many examples as possible, focus on applications, try a naive synthesis…), but nothing all that satisfying.

So one of my goals during my epistemology training is to look at a bunch of examples of deconfusion, and look for patterns in how they got started and in how important or problematic their starting points turned out to be.

Let’s start with Sadi Carnot’s “Reflections on the Motive Power of Fire”, which founded thermodynamics as a science. And the fascinating part is that he seems to have drawn heavily on an analogy with his father Lazare Carnot’s work on a general theory of machines. At least that’s the thesis of philosopher of science John D. Norton in “How Analogy Helped Create the New Science of Thermodynamics”, the paper which was my starting point here.

Following Norton, I’ll use the names “Sadi” and “Lazare” to disambiguate the two Carnots, even though as a fellow Frenchman I feel uncomfortable getting on a first-name basis that quickly.

What did Sadi discover?

My thermodynamics is quite rusty (never liked the subject in college), and I never studied its history. So I at least needed a reminder on what Sadi’s work was all about, and how it fit into the knowledge of its day and the narrative of thermodynamics.

From the Norton paper and Wikipedia, I get that Sadi searched for a general theory of heat engines. Actually, it looks like he introduced the more general concept of a heat engine, or at least made the explicit distinction between steam engines and heat engines, with the former being only a specific form of the latter. What Sadi sought and found was a theory of heat engines general enough to answer questions like the best possible design of a heat engine, or the relevance of using steam versus other gases.

His answer, in a fundamental move establishing a shared frame for the next 200 years of thermodynamics, was that the efficiency of such an engine depended on only two things:

  • The difference of temperature between the hot source and the cold sink
  • How close to reversibility the transfer of heat was in the engine.

That’s a pretty cool example of a shared frame simplifying a whole mess of engineering problems into two fundamental points. And the abstractions and proof methods Sadi introduced (thermodynamically reversible processes, Carnot’s theorem, thinking in terms of cycles) proved essential for putting thermodynamics on a more secure basis some 50 years later.
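To make the first point concrete, here is a minimal sketch of the modern version of the limit (my addition; the quantitative formula in terms of absolute temperatures came decades later, via Kelvin and Clausius, and is not in the Reflections itself):

```python
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Upper bound on the efficiency of any heat engine running between a hot
    source at t_hot and a cold sink at t_cold (temperatures in kelvin).

    Modern Kelvin/Clausius formulation of Sadi's limit; his own argument
    predates the absolute temperature scale.
    """
    if not (0 < t_cold < t_hot):
        raise ValueError("need 0 < t_cold < t_hot (in kelvin)")
    return 1.0 - t_cold / t_hot

# Example: a boiler at 450 K and a condenser at 300 K allow at most ~33% of
# the heat drawn to be converted into work, whatever the working gas or the
# cleverness of the design.
print(carnot_efficiency(450.0, 300.0))  # ~0.333
```

Whether an actual engine approaches this bound is exactly the second point above: how close its heat transfers are to reversible.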

Normally with that sort of incredible insight, I expect the path taken to be lost to time, with maybe a couple of pointers here and there. Yet it seems well accepted by historians of science that Sadi’s work was heavily inspired by an analogy to his father Lazare Carnot’s work on a general theory of machines. And Norton makes quite a convincing case for it.

Lazare’s theory of machines and the analogy to heat engines

Norton dedicates a good 16 pages of his paper to explaining Lazare Carnot’s general theory of machines. I don’t plan on going that far, but the gist is important for the analogy.

Basically, Lazare published a theory about what matters most in designing efficient machines, where machines are, broadly speaking, constructions that transmit work, or “motive power” as Lazare wrote. This includes levers, winches, and pulleys, but also hydraulic and pneumatic machines.

This is the first part of the analogy Norton builds: Lazare showed his son that you could unify a whole class of systems within one theory. He thus showed him that moving to the abstract could be a fruitful move.

What was this theory? Well, Lazare showed in his (apparently quite badly written) essay that the most efficient machines are those for which the percussive shocks between parts are minimized.

That doesn’t look much like Sadi’s later thermodynamical work, until we look at Lazare’s characterization of motions that minimize these shocks, the so-called “geometric motions”. Honestly, I had trouble following Norton’s analysis of these geometric motions, and he implies that Lazare’s writing is even worse. The important point seems to be that Lazare characterizes them in terms of reversibility: geometric motions can be reversed, like the rotation of two weights linked by a taut wire around their common center.

So in this characterization of the most efficient movements of machine parts through reversibility, Norton sees the seeds of thermodynamically reversible processes, which analogously characterize the design of optimal heat engines in Sadi’s work.

Another aspect of Lazare’s work that Norton highlights is its fundamental dissipative ontology: Lazare works within a mechanics of hard bodies with inelastic shocks and loss of “energy” (technically the concept wasn’t really formalized yet, but that’s what it amounts to). Such an ontology was ill-fated in mechanics, as I gather from the fundamental nature of basically all the conservation laws I’ve heard of.

Yet this mistake might have served his son and Science incredibly well; Norton argues that

Whatever may have directed Lazare’s choice of this ill-fated dissipative ontology, it was most fortuitous for Sadi. For Lazare mapped out ways of understanding systems that are inherently dissipative. When Sadi turned to analyze just such a system, heat engines, he had available to him the model of Lazare’s work. He could copy its ways and methods and, using them, devise the basis of what becomes the modern theory of thermodynamics.

To summarize, Norton draws an analogy in three points between Lazare and Sadi’s work:

  • The goal of a general theory for a whole class of systems
  • A dissipative ontology
  • An approach to characterizing the most efficient systems in such an ontology, based on reversibility

Surprising benefits of this analogy

Why is this exciting? Because it highlights a probable direct path Sadi took in proposing one of the most important concepts in thermodynamics: thermodynamically reversible processes. Now, there is no direct quote of Sadi referencing his father’s work by name, but Norton has a bunch of quotes where Sadi basically reexplains and restates Lazare’s work as his example and starting point.

(From Sadi’s Reflexions)

Machines which do not receive their motion from heat, those which have for a motor the force of men or of animals, a waterfall, an air-current, etc., can be studied even to their smallest details by the mechanical theory. All cases are foreseen, all imaginable movements are referred to these general principles, firmly established, and applicable under all circumstances. This is the character of a complete theory.

[...]

A similar theory is evidently needed for heat-engines. We shall have it only when the laws of Physics shall be extended enough, generalized enough, to make known beforehand all the effects of heat acting in a determined manner on any body.

I for one find this analogy quite convincing. And it helps explain why Sadi thought of such a counterintuitive idea as thermodynamically reversible processes: they are the thermodynamic analogue of Lazare’s geometric motions.

Norton definitely stresses the weirdness of reversible processes a lot. He’s not content with just calling them ideal processes or thought experiments, because they rest on a logical contradiction: they must constantly be at equilibrium (or infinitesimally close to equilibrium) while changing. He has a whole paper on it called “The Impossible Process”!

I’m not sure whether I agree with Norton’s analysis, but the idealized nature of reversible processes, their importance (for example in proving Carnot’s theorem), and the surprising aspect of their invention seem widely recognized. And it all apparently follows naturally from Lazare’s theory!

Even more fascinating, Sadi drew from Lazare’s work a concept (reversibility) that should by all rights break when going from the mechanical to the thermodynamical. Lazare’s reversible processes can be in equilibrium (free from percussive shocks) while still moving, thanks to inertia; but that doesn’t work for thermal processes. This is a point where the analogy should break, as Norton writes:

This, then, is a significant disanalogy between Lazare’s machines and Sadi’s engines. Realization of the most efficient processes for Lazare’s machines does not contradict the laws of mechanics. Realization of the most efficient processes of Sadi’s engines, however, does contradict basic thermal laws.

And yet, that was the right move to make! Pushing the analogy “too far” led to a fundamental building block of thermodynamics, itself one of the main foundations of modern physics.

Other epistemic curiosities

A bunch of other aspects of Norton’s paper piqued my curiosity, even if I haven’t yet dug deeply into any of them. I’m not promising I will, but they’re definitely on my mind and on my reading list.

Epistemic Analysis of Thermodynamically Reversible Processes

I already mentioned this one above, but Norton and other philosophers of science have a whole strand of literature analyzing the weirdness of reversible processes as a concept in thermodynamics, and how to make sense of them in light of both their logical contradiction and their fruitfulness.

The starting point would be two of Norton’s papers on the subject, and the reviews/responses in the philosophy of science literature. Plus some digging into the actual thermodynamics to get a better grasp of the use of these processes.

Impossibility of perpetual motion

Sadi’s main result, Carnot’s theorem, relies on a proof by contradiction whose absurd conclusion is a perpetual motion (and heat) machine, which is considered impossible. That surprised me, because I was expecting the theorem to prove, or give grounding for, the impossibility of perpetual motion machines. So I became more curious about where that impossibility comes from. It sounds like there’s a path through the Second Law of Thermodynamics and Noether’s theorem, but I haven’t followed it yet.
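For context, here is my compressed paraphrase of that contradiction argument, in its standard modern form (not Sadi’s own caloric-theory wording):

```latex
% Suppose an engine X beats a reversible engine R operating between the same
% hot source and cold sink, i.e. \eta_X > \eta_R. Run R backwards as a heat
% pump, driven by part of X's output, so that each cycle it returns to the
% hot source exactly the heat Q_h that X draws from it.
\begin{align*}
  W_X &= \eta_X\, Q_h && \text{work produced by } X,\\
  W_R &= \eta_R\, Q_h && \text{work consumed by the reversed } R,\\
  W_X - W_R &= (\eta_X - \eta_R)\, Q_h > 0.
\end{align*}
% The hot source is left unchanged, yet the pair outputs net work every cycle,
% drawing it entirely from the cold sink: a perpetual motion machine, which is
% assumed impossible. Hence \eta_X \le \eta_R: no engine beats a reversible one.
```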

And with the relevance of this impossibility result to physics and to Yudkowskian analogies about the security mindset, that sounds like a great thing to clarify.

Caloric fluid, an interesting mistake?

One aspect of Sadi’s work that I haven’t discussed here is that he subscribed to the caloric theory of heat, which sees heat as a self-repellent fluid. This was superseded by the kinetic theory later in the history of physics, in part because the caloric theory couldn’t deal with conservation of energy and the second law (at least that’s what Norton and Wikipedia say).

Where Norton and Wikipedia disagree is on the fruitfulness of this ontological mistake. Wikipedia describes Sadi’s accomplishment as happening “despite” the caloric theory, whereas Norton argues that seeing heat as a fluid pointed the way to the analogy with Lazare’s work, and also made obvious the importance of the entry and exit points of heat, which end up being the only two parts that matter for the efficiency limit of a heat engine.

That looks like a fascinating example of an interesting or fruitful mistake, which paved the way to getting things actually right.

Sadi’s abstractions carried over to the next paradigms

More generally, Sadi introduced a bunch of concepts and intuitions (reversible processes, his results, and his proof method for Carnot’s theorem) which survived multiple paradigm changes and reformalizations of the underlying physics. That’s a great example of principles that carry over to the next paradigm, as John writes.

Who wouldn’t want to understand better how to be so right with such imperfect foundations?

Conclusion

If we follow Norton (and I'm quite convinced by his case), Sadi got a clear inspiration and analogy from his father's work, which helped tremendously in making his theory of heat engines so powerful and eventually right.

Yet that doesn't solve our initial problem (how did Sadi discover his result?); it only refines the questions. Sure, he might have leveraged Lazare's analogy, but why did that work? Why is Lazare's work so productive when applied to thermodynamics, when it's a dead end in its original field (mechanics)? And how come the analogy leads to the right insight precisely where it breaks?

Eventually I aim for epistemic tools that refine, correct and distill the underlying mental moves so that we can more easily emulate this type of intellectual progress. The first step is to realize that there's a confusion here: something that doesn't fit with our nice and clean narratives of scientific progress. And then investigate and make sense of it.

Comments

It sounds like there’s a path through the Second Law of Thermodynamics and Noether’s theorem, but I haven’t followed it yet.

There is indeed a path. Note a few potential "loopholes":

  1. There are (unphysical) Newtonian physics systems where it is possible to approach negative-infinite potential energy in finite time. So yes, strictly speaking energy is conserved, but that doesn't actually say that much.
    1. (For instance: https://en.wikipedia.org/wiki/Painlev%C3%A9_conjecture#/media/File:Xia's_5-body_configuration.png )
      1. (Roughly speaking: the top 2 bodies and the center body undergo a 3-body encounter that drops the top 2 bodies into a smaller orbit, using the resulting potential energy to accelerate the top 2 bodies upward and to accelerate the middle body towards the bottom 2 bodies faster than it arrived. Repeat, mirrored, with the bottom 2 bodies. Repeat, mirrored, with the top 2 bodies. Repeat, mirrored, with the lower 2 bodies. Etc. Each loop pulls gravitational potential energy from the 2 sets of 2 bodies and dumps it into kinetic energy, and the center body speeds up faster than the 2 sets of 2 bodies pull apart. The net result is an infinite number of 3-body encounters and infinite velocity in finite time...)
  2. It relies on (continuous) time-translation symmetry.
    1. This doesn't hold in general for general relativity.
    2. This does hold for Newtonian mechanics.
    3. Time translation symmetry is a hypothesis, although a fairly well-tested one.
    4. (If you want to rabbit-hole here, look at time crystals.)

(That being said, it's been too long since I've looked seriously into Physics.)

Thanks for the comment!

Could you give more details on the path itself?

Also, do you consider your loopholes to be technicalities, or more serious problems?

Could you give more details on the path itself?

...honestly, probably not well. It's been too long. At a high level: Noether's theorem implies that if you have a Lagrangian that's invariant under a perturbation of coordinates, that corresponds to a conserved quantity of the system. In particular, invariance under time perturbations (a.k.a. continuous time-translation symmetry) corresponds to a conserved quantity that turns out to be the energy.
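If it helps, here is the compressed textbook version of that last step (my addition, using the shortcut via the energy function rather than the full Noether machinery):

```latex
% If the Lagrangian has no explicit time dependence, \partial L/\partial t = 0,
% then along solutions of the Euler--Lagrange equations the energy function
% E = \sum_i \dot{q}_i\,\partial L/\partial\dot{q}_i - L is conserved:
\frac{dE}{dt}
  = \sum_i \ddot{q}_i \frac{\partial L}{\partial \dot{q}_i}
  + \sum_i \dot{q}_i \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
  - \frac{dL}{dt}
  = -\frac{\partial L}{\partial t}
  = 0.
% (The middle equality uses the Euler--Lagrange equations to cancel terms.)
% For the usual L = T - V with a kinetic term quadratic in the velocities,
% E is just T + V, the total energy.
```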

Also, do you consider your loopholes to be technicalities, or more serious problems?

For 1: it's like someone showing how to break your 1024-bit hash in 2^500 operations. It isn't a problem in and of itself, but it's suggestive of deeper problems. (It requires both infinite precision and point particles to achieve, neither of which appear to be actually possible in our universe.)

For 2: I'd consider the issues with general relativity (and however quantum gravity shakes out) to be potentially an issue - though given that it's not an issue for classical mechanics any loopholes would likely be in regimes where the Newtonian approximation breaks down.

That all being said, take this with a grain of salt. I'm not confident I remembered everything correctly.

Yet that doesn't solve our initial problem (how did Sadi discover his result?); it only refines the questions. Sure, he might have leveraged Lazare's analogy, but why did that work? Why is Lazare's work so productive when applied to thermodynamics, when it's a dead end in its original field (mechanics)? And how come the analogy leads to the right insight precisely where it breaks?

Default hypothesis: Lots of people were trying lots of different ways of making progress based on lots of different bad analogies; one happened to work out despite being bad. "Even a broken clock is right twice a day."

Sorry for not answering earlier; there are a lot of things I wanted to say in response to this comment, and I took some time to organize my thoughts.

First, there's an object-level point on which we might agree: I don't particularly believe that Sadi chose his analogy; it just made sense to him. His intuition might have captured hidden bits of evidence unconsciously, which is the sort of thing I would want to learn how to do consciously and explicitly as much as possible.

Now on the meta level, I believe that your default hypothesis is wrong because it is assuming an incredible amount of structure and is in contradiction with the history of science (and invention in general). You use the analogy of the clock, which has nothing to do with how the universe works; it doesn't go through every possible pattern one after the other, such that any guess will be right some time.

And more generally, impressive results in science, maths, and a lot of other places come from people finding anything at all in a high-dimensional world. If you have an incredibly large space of possibilities, no amount of unbiased random sampling will yield anything, certainly not the bounty of results we get from science. Even more so in a time like the Carnots', when there weren't that many scientists at all.
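To put toy numbers on this (purely illustrative; the 10^-60 figure is the kind of raw prior I mention below):

```python
# Purely illustrative: if a correct theory really were a 1-in-10^60 draw and
# sampling were unbiased, even an absurdly generous guessing budget gets
# essentially nowhere.
p_hit = 1e-60                   # chance that one unbiased guess is right
guesses = 10_000 * 100 * 365    # 10,000 scientists, 100 guesses a day, a full year
# 1 - (1 - p_hit) ** guesses underflows in floating point, so use the
# small-probability approximation guesses * p_hit instead:
p_any_success = guesses * p_hit
print(p_any_success)            # ~3.7e-52, i.e. effectively never
```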

It's basically Einstein's Arrogance and Science in a High-Dimensional World: to do anything at all, you need to reveal hidden bits of evidence somehow to reduce the search space.

In this example, I expect that most of the bits of evidence were revealed by Lazare: even though he had a bad ontology for mechanics (which still works decently well at the macro level, mind you), he still created a powerful framework for thinking about dissipative systems. He basically extracted bits of evidence from the whole class of dissipative systems, enough to say "the most efficient ones will look like that", reducing the search space tremendously for Sadi later on.

If you want some evidence that this example was not just a random sample that happened to work but a strongly biased move, there's the fact that Sadi's work got used (after being neglected) 25 years later for the formalization of modern thermodynamics; despite its age, that's what the founders of modern thermodynamics built on. Also, most of his results, despite staying in obscurity for at least 10 years, weren't rediscovered in the meantime AFAIK (otherwise I would expect things like Carnot's theorem to carry the names of multiple discoverers).

Obviously hypotheses do not just come out of an "unbiased random sampling" process; there are intuitions driving them that incorporate tons of evidence the scientist already has.

I thought you were saying something along the lines of: "some people seem particularly good at this, instead of producing hypotheses that have a 1/1000 chance of being correct, they instead produce hypotheses with a 1/2 chance of being correct. Let's look at these people in particular and figure out how to replicate their reasoning".

I'm saying in response to that (which may not be what you meant): "In the specific case of Carnot's theorem, my default hypothesis is that ~1000 people tried hypotheses with probability ~1/1000 and one happened to be correct; you can study any of those 1000 people / ideas instead of studying Carnot in particular. (Studying the wrong ones is probably better, the wrong parts could tell you what people can't do when creating hypotheses in advance.)"

I believe that your default hypothesis is wrong because it is assuming an incredible amount of structure and is in contradiction with the history of science (and invention in general).

I wasn't trying to give a grand theory of science and invention. I'm trying to explain the specific question I quoted, about why a seemingly "bad" analogy still worked out well in this case. 

I also don't know what you think the hypothesis is in contradiction with.

If you have an incredibly large space of possibilities, no amount of unbiased random sampling will yield anything, certainly not the bounty of results we get from science.

I totally agree it was biased in the sense that "dissipative theory" is a lot simpler than "on Sundays, my experiments do whatever Abraham Lincoln would have predicted would happen; on other days it's whatever George Washington would have predicted", and so people investigated the theories like the former much more than theories like the latter.

If you want some evidence that this example was not just a random sample that happened to work but a strongly biased move, there's the fact that Sadi's work got used (after being neglected) 25 years later for the formalization of modern thermodynamics; despite its age, that's what the founders of modern thermodynamics built on. Also, most of his results, despite staying in obscurity for at least 10 years, weren't rediscovered in the meantime AFAIK (otherwise I would expect things like Carnot's theorem to carry the names of multiple discoverers).

I expect to see this result in a random sampling world; why don't you? It seems like you just have to wait for the same random sample to be drawn again; not drawing that sample in 25 years seems totally normal.

Thanks for the detailed answer!

I thought you were saying something along the lines of: "some people seem particularly good at this, instead of producing hypotheses that have a 1/1000 chance of being correct, they instead produce hypotheses with a 1/2 chance of being correct. Let's look at these people in particular and figure out how to replicate their reasoning".

I'm saying in response to that (which may not be what you meant): "In the specific case of Carnot's theorem, my default hypothesis is that ~1000 people tried hypotheses with probability ~1/1000 and one happened to be correct; you can study any of those 1000 people / ideas instead of studying Carnot in particular. (Studying the wrong ones is probably better, the wrong parts could tell you what people can't do when creating hypotheses in advance.)"

I feel like you're getting my point, but I'll still add the subtlety that I'm saying "anyone who isn't biased somehow has a chance of 10^-60, and so always fails". I'm still confused about why you think your proposal is more realistic. Could you give me your intuition here for the uniform sampling case? Or is it just that this is the model you go for by default?

I wasn't trying to give a grand theory of science and invention. I'm trying to explain the specific question I quoted, about why a seemingly "bad" analogy still worked out well in this case. 

I also don't know what you think the hypothesis is in contradiction with.

Contradiction with the fact that many discoveries and inventions seem to emerge in cases where the possibility space was far too large for a uniform sampling to have a chance.

I totally agree it was biased in the sense that "dissipative theory" is a lot simpler than "on Sundays, my experiments do whatever Abraham Lincoln would have predicted would happen; on other days it's whatever George Washington would have predicted", and so people investigated the theories like the former much more than theories like the latter.

I agree with that, but I meant more that dissipative theory was biased towards the truth compared to theories that would have been considered at the same level.

I expect to see this result in a random sampling world; why don't you? It seems like you just have to wait for the same random sample to be drawn again; not drawing that sample in 25 years seems totally normal.

When I look at my confusion here, it's because the point I was making is that in 25 years people rediscovered and recreated the same stuff about steam engines a lot (I haven't checked deeply but would be willing to bet on it), whereas they hadn't found Sadi's result again. Which to me is clear evidence that the sampling, if random, was not uniform at all. Does that answer your question, or am I missing your point completely?

Could you give me your intuition here for the uniform sampling case?

A bad analogy led to a good theory. This seems more probable under theories that involve luck than theories that involve skill. Hence, 1000 people with 1/1000 probability theories, rather than 2 people with 1/2 probability theories. Again, this is for this specific case, not for science as a whole.

I don't think the literal uniform theory is actually correct; there are still going to be differences in people's ability, so that it's more like 10,000 people with ~0 probability theories, 1000 people with 1/2000 probability theories, and 100 people with 1/200 probability theories. But the fundamental point is that I don't expect to gain much more by studying the people who got it right than by studying the people who got it wrong in a plausible way (and if anything I expect you to learn more from the latter category).

Contradiction with the fact that many discoveries and inventions seem to emerge in cases where the possibility space was far too large for a uniform sampling to have a chance.

Do you agree there's no contradiction now that I've specified that it's sampling from a biased distribution of ideas that have ~1/1000 probability?

I meant more that dissipative theory was biased towards the truth compared to theories that would have been considered at the same level.

Yeah I think it's unclear why that should be true. (Assuming that by "at the same level" you mean theories that were posed by other scientists of comparable stature seeking to explain similar phenomena.)

When I look at my confusion here, it's because the point I was making is that in 25 years people rediscovered and recreated the same stuff about steam engines a lot (I haven't checked deeply but would be willing to bet on it), whereas they hadn't found Sadi's result again. Which to me is clear evidence that the sampling, if random, was not uniform at all.

How is it clear evidence? Imagine a "uniform random sampling" story in which we produce 10 theories of probability 1/1000 per year. Then in expectation it takes 100 years to produce the right theory, and it is entirely unsurprising that in 25 years people don't rediscover the right theory. So how are you using the observation "not rediscovered in 25 years" to update against "uniform random sampling"?
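A quick check of the arithmetic in that story (same numbers as above):

```python
# The "uniform random sampling" story above: 10 theories per year, each with
# a 1/1000 chance of being the right one.
p_theory = 1 / 1000
theories_per_year = 10
years = 25

p_still_missing = (1 - p_theory) ** (theories_per_year * years)
print(p_still_missing)                      # ~0.78: most likely still not rediscovered
print(1 / (p_theory * theories_per_year))   # ~100 years expected before the first hit
```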

My take: if you are somehow going from the "real" prior probability (i.e. the figure for a true random draw from the uniform distribution on the hypothesis space, which Adam estimated in his comment as 10^-60, although I expect it could be even lower depending on exactly what hypothesis space we're talking about) all the way to 10^-3 (the 1/1000 figure you give), you are already jumping a large number of orders of magnitude, and it seems to me unjustified to assert you can only jump this many orders of magnitude, but no further. Indeed, if you can jump from 10^-60 to 10^-3, why can you not in principle jump slightly farther, and arrive at probability estimates that are non-negligible even from an everyday perspective, such as 10^-2 or even 10^-1?

And it seems to me that you must be implicitly asserting something like this, if you give the probability of a random proposed theory being successful as 1 in 1000 rather than 1 in 10^60. Where did that 1/1000 number come from? It certainly doesn't look to me like it came out of any principled estimate for how much justified Bayesian update can be wrung out of the evidence historically available, where that estimate just happened to arrive at ~570 decibels but no more; in fact it seems like that 1000 number basically was chosen to roughly match the number of hypotheses you think were plausibly put forth before the correct one showed up. If so, then this is... pretty obviously not proper procedure, in my view.
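(As a sanity check of the decibel figure, under the usual 10·log10-of-odds convention:)

```python
import math

# Moving from odds of roughly 10^-60 to roughly 10^-3 spans ~57 orders of
# magnitude in odds, i.e. ~570 decibels at 10 dB per factor of 10.
prior, posterior = 1e-60, 1e-3
prior_odds = prior / (1 - prior)
posterior_odds = posterior / (1 - posterior)
print(10 * math.log10(posterior_odds / prior_odds))  # ~570 dB
```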

For myself, I basically find Eliezer's argument in Einstein's Speed as convincing as I did when I first read it, and for basically all the same reasons: finding the right theory and promoting it to the range where it first deserves attention but before it becomes an obvious candidate for most of the probability mass requires hitting a narrow target in update-space, and humans are not in general known for their precision. With far greater likelihood, if somebody identified the correct-in-retrospect theory, the evidence available to them at the time was sufficient from a Bayesian perspective to massively overdetermine that theory's correctness, and it was only their non-superintelligence that caused them to update so little and so late. Hitting a narrow range is implausible; overshooting that range, on the other hand, significantly less so.

At this point you may protest that the 1/1000 probability you give is not meant as an estimate for the actual probability a Bayes-optimal predictor would assign after updating on the evidence; instead it's whatever probability is justified for a human to assign, knowing that they are likely missing much of the picture, and that this probability is bounded from above at 10^-3 or thereabouts, at least for the kind of hard scientific problems the OP is discussing.

To be blunt: I find this completely unpersuasive. Even ignoring the obvious question from before (why 10^-3?), I can see no a priori reason why someone could not find themselves in an epistemic state where (from the inside at least) the evidence they have implies a much higher probability of correctness. From this epistemic state they might then find themselves producing statements like

I believe myself to be writing a book on economic theory which will largely revolutionize—not I suppose, at once but in the course of the next ten years—the way the world thinks about its economic problems. I can’t expect you, or anyone else, to believe this at the present stage. But for myself I don’t merely hope what I say—in my own mind, I’m quite sure.

—John Maynard Keynes

statements which, if you insist on maintaining that 10^-3 upper bound (and why so, at this point?), certainly become much harder to explain without resorting to some featureless "overconfidence" thingy; and that has been discussed in detail.

Again, I'm not claiming that this is true in general. I think it is plausible to reach, idk, 90%, maybe higher, that a specific idea will revolutionize the world, even before getting any feedback from anyone else or running experiments in the world. (So I feel totally fine with the statement from Keynes that you quoted.)

I would feel very differently about this specific case if there was an actual statement from Sadi of the form "I believe that this particular theorem is going to revolutionize thermodynamics" (and he didn't make similar statements about other things that were not revolutionary).


it seems like that 1000 number basically was chosen to roughly match the number of hypotheses you think were plausibly put forth before the correct one showed up. If so, then this is... pretty obviously not proper procedure, in my view.

I totally agree that's what I did, but it seems like a perfectly fine procedure. Idk where the disconnect is, but maybe you're thinking of "1000" as coming from a weirdly opinionated prior, rather than from my posterior.

From my perspective, I start out having basically no idea what the "justifiable prior" on that hypothesis is. (If you want, you could imagine that my prior on the "justifiable prior" was uniform over log-10 odds of -60 to 10; my prior is more opinionated than that but the extra opinions don't matter much.) Then, I observe that the hypothesis we got seems to be kinda ad hoc with no great story even in hindsight for why it worked while other hypotheses didn't. My guess is then that it was about as probable (in foresight) as the other hypotheses around at the time, and combined with the number of hypotheses (~1000) and the observation that one of them worked, you get the probability of 1/1000.

(I guess a priori you could have imagined that hypotheses should either have probability approximately 10^-60 or approximately 1, since you already have all the bits you need to deduce the answer, but it seems like in practice even the most competent people frequently try hypotheses that end up being wrong / unimportant, so that can't be correct.)


As a different example, consider machine learning. Suppose you tell me that <influential researcher> has a new idea for RL sample efficiency they haven't tested, and you want me to tell you the probability it would lead to a 5x improvement in sample efficiency on Atari. It seems like the obvious approach to estimate this probability is to draw the graph of how much sample efficiency improved from previous ideas from that researcher (and other similar researchers, to increase sample size), and use that to estimate P(effect size > 5x | published), and then apply an ad hoc correction for publication bias. I claim that my reasoning above is basically analogous to this reasoning.
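To make that concrete, here is a sketch of the procedure (every number below is made up, purely to illustrate the shape of the reasoning):

```python
# Hypothetical sketch of the base-rate estimate described above. The gains are
# made-up stand-ins for "sample-efficiency improvements from previous published
# ideas by comparable researchers".
past_gains = [1.1, 1.3, 1.5, 1.5, 2.0, 2.2, 3.0, 4.0, 6.0, 8.0]

threshold = 5.0
p_big_gain_given_published = sum(g >= threshold for g in past_gains) / len(past_gains)

# Ad hoc correction: an as-yet-untested idea should fare worse on average than
# the published (hence selected-for-success) ones.
publication_bias_correction = 0.5

estimate = p_big_gain_given_published * publication_bias_correction
print(estimate)  # 0.2 * 0.5 = 0.1
```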