Epistemic status: I wrote this post quickly, and largely to solicit feedback on the claims I make in it. This is because (a) I’m not sure about these claims (or how I’ve explained them), and (b) the question of what I should believe on this topic seems important in general and for various other posts I’m writing. (So please comment if you have any thoughts on this!)

I’ve now read a fair amount on topics related to the questions covered here, but I’m not an expert, and I haven’t seen (or explicitly looked for) a direct treatment of them. It’s very possible this has already been thoroughly and clearly covered elsewhere; if so, please comment with the link!

I basically accept a Bayesian interpretation of probability, "in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief" (Wikipedia). Relatedly, I think I accept the idea that we can always assign probabilities to propositions (or at least use something like an uninformative prior), and "make sense of" these probabilities, even if sometimes we have incredibly little basis for making those probability estimates.

This idea seems to be disputed fairly often, and the dispute seems related to the purported distinction between "risk" and "uncertainty" (a distinction I think is confused). I think the arguments against this idea are flawed. But I want to test my beliefs and properly engage with those arguments. So in this post, I first discuss how I believe we can arrive at, and make sense of, probability estimates in what are sometimes put forward as "challenging cases", before discussing what I think is probably the most challenging type of case: what I call "supernatural-type" claims.

Weak examples of "Knightian uncertainty"

Sometimes people put forward what seem to me to be very weak examples of cases in which, they propose, we simply cannot arrive at probability estimates. (This may simply be a result of them having a frequentist interpretation of probability, but that often doesn't seem to be made explicit or defended.) Here's one example:

there are situations with so many unique features that they can hardly be grouped with similar cases, such as the danger resulting from a new type of virus, or the consequences of military intervention in conflict areas. These represent cases of (Knightian) uncertainty where no data are available to estimate objective probabilities. While we may rely on our subjective estimates under such conditions, no objective basis exists by which to judge them (e.g., LeRoy & Singell, 1987). (source)

It seems obvious to me that a wealth of data is available for such cases. There have been many viruses and military interventions before. None of those situations will perfectly mirror the situations we’re trying to predict, and that’s definitely a very important point. We should therefore think very carefully about whether we’re being too confident in our predictions (i.e., using too narrow a “confidence interval”[1] and thus not adequately preparing for especially “high” or “low” possibilities).

But we can clearly do better than nothing. To start small, you’d be comfortable with the claim that a new type of virus, if it hits this year, is more likely to kill somewhere between 0 and 1 billion people than somewhere between 1000 and 1001 billion people (i.e., far more than everyone alive), right? And in fact, we have empirical evidence that some people can reliably do better than chance (and better than “0 to 1 billion”) in making predictions about geopolitical events like these, at least over timelines of a few years (from Tetlock’s work).

AGI

What about something that seems more unique or unprecedented, and where we also may have to stretch our predictions further into the future, like artificial general intelligence (AGI) timelines? On that question, experts disagree wildly, and are seemingly quite swayed by things like how the question is asked (Katja Grace on 80k; search for “It’s a bit complicated” in the transcript). This makes me highly unconfident in any prediction I might make on the topic (and thus pushes me towards making decisions that are good given a wide range of possible timelines).

But I believe I know more than nothing. I believe I can reasonably assign some probability distribution (and then use something like the median or mean of that as if it were a point estimate, for certain purposes). If that seems like raw hubris, do you think it’s worth actually behaving as if AGI is just as likely to be developed 1 minute from now as somewhere around 2 to 300 years from now? What about behaving as if it’s likely to occur in some millennium 50 quintillion years from now, and not in this millennium? So you’d at least be fairly happy bounding your probability distribution somewhere in between those points 1 minute from now and 50 quintillion years from now, right?

One could say that all I’ve done there is argue that some probabilities we could assign would seem especially outrageous, not that we really can or should assign probabilities to this event. But if some probabilities are more reasonable than others (and it certainly seems they are, though I can’t prove it), then we can do better by using those probabilities than by using something like an uninformative prior.[2] And as far as I’m aware, principles for decision making without probabilities essentially either collapse to acting as if one is using an uninformative prior or predictably lead to seemingly irrational and bad decisions (I’ll be posting about this soon).
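
To make that point a little more concrete, here's a toy sketch comparing an "uninformative" uniform prior over an enormous time window with a rough judgment-informed distribution. The 50-quintillion-year horizon comes from the example above; the exponential shape and its 200-year mean are purely illustrative assumptions of mine, not a real forecast:

```python
# Toy comparison (not a real forecast): probability assigned to "AGI within this
# millennium" vs "AGI in a millennium ending ~50 quintillion years from now",
# under a uniform prior over the whole window vs an (arbitrary) exponential prior.
import math

MILLENNIUM = 1_000                 # years
HORIZON = 50 * 10**18              # 50 quintillion years
FAR_START = HORIZON - MILLENNIUM   # a millennium ending 50 quintillion years out

def uniform_prob(start, end, horizon=HORIZON):
    """P(event falls in [start, end]) under a uniform prior over [0, horizon]."""
    return (end - start) / horizon

def exponential_prob(start, end, mean_years=200.0):
    """P(event falls in [start, end]) under an exponential prior with the given mean."""
    rate = 1.0 / mean_years
    return math.exp(-rate * start) - math.exp(-rate * end)

print(uniform_prob(0, MILLENNIUM))           # 2e-17 -- same as the far-future millennium
print(uniform_prob(FAR_START, HORIZON))      # 2e-17
print(exponential_prob(0, MILLENNIUM))       # ~0.993
print(exponential_prob(FAR_START, HORIZON))  # ~0.0
```

The uniform prior treats the two millennia as exactly equally likely, whereas even a very crude judgment-informed distribution doesn't, which is all the intuition in footnote [2] requires (whatever one thinks of the specific parameters chosen here).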

And in any case, we do have relevant data for the AGI question, even if we’ve never developed AGI itself - we have data on AI development more broadly, development related to computing/IT/robotics more broadly, previous transformative technologies (e.g., electricity), the current state of funding for AI, current governmental stances towards AI development, how funding and governmental stances have influenced tech in the past, etc.

Supernatural-type claims

But that leads me to what does seem like it could be a strong type of counterexample to the idea that we can always assign probabilities: claims of a “supernatural”, “metaphysical”, or “unobservable” nature. These are very fuzzy and debatable terms, but defining them isn’t my main purpose here, so instead I’ll just jump into some examples:

  1. What are the odds that “an all-powerful god” exists?
  2. What are the odds that “ghosts” exist?
  3. What are the odds that “magic” exists?
  4. What are the odds that “non-naturalistic moral realism” is correct (or that “non-natural objective moral facts” exist)?[3]

To me, and presumably most LessWrong readers, the most obvious response to these questions is to dissolve them, or to at least try to pin the questioner down on definitions. And I do think that's very reasonable. But in this post I want to put my (current) belief that "we can always assign probabilities to propositions (or at least use something like an uninformative prior)" to a particularly challenging test, so from here on I'll assume we've somehow arrived at a satisfactorily precise understanding of what the question is actually meant to mean.

In that case, my intuitions would suggest I should assign a very low probability to each of these propositions.[4] But what basis would I have for that? More specifically, what basis would I have for any particular probability (or probability distribution) I assign? And what would it even mean?

This is Chris Smith’s statement of this apparent issue, which was essentially what prompted this post:

Kyle is an atheist. When asked what odds he places on the possibility that an all-powerful god exists, he says “2%.”

[...] I don’t know what to make of [Kyle’s] probability estimate.

[Kyle] wouldn’t be able to draw on past experiences with different realities (i.e., Kyle didn’t previously experience a bunch of realities and learn that some of them had all-powerful gods while others didn’t). If you push someone like Kyle to explain why they chose 2% rather than 4% or 0.5%, you almost certainly won’t get a clear explanation.

If you gave the same “What probability do you place on the existence of an all-powerful god?” question to a number of self-proclaimed atheists, you’d probably get a wide range of answers.

I bet you’d find that some people would give answers like 10%, others 1%, and others 0.001%. While these probabilities can all be described as “low,” they differ by orders of magnitude. If probabilities like these are used alongside probabilistic decision models, they could have extremely different implications. Going forward, I’m going to call probability estimates like these “hazy probabilities.”

I can sympathise with Smith’s concerns, though I think ultimately we can make sense of Kyle’s probability estimate, and that Kyle can have at least some grounding for it. I’ll now try to explain why I think that, partly to solicit feedback on whether this thinking (and my explanation of it) makes sense.

In the non-supernatural cases mentioned earlier, it seemed clear to me that we had relevant data and theories. We have data on previous viruses and military interventions (albeit likely from different contexts and circumstances), and some relevant theoretical understandings (e.g., from biology and epidemiology, in the virus case). We lack data on a previous completed instance of AGI development, but we have data on cases we could argue are somewhat analogous (e.g., the industrial revolution, development and roll-out of electricity, development of the atomic bomb, development of the internet), and we have theoretical understandings that can guide us in our reference class forecasting.

But do we have any relevant data or theories for the supernatural-type cases?

Assuming that whether the claim is true can affect the world

Let’s first make the assumption (which I’ll reverse later) that these propositions, if true, would at some point have at least some theoretically observable consequences. That is, we’ll first assume that we’re not dealing with an utterly unverifiable, unfalsifiable hypothesis, the truth of which would have no impact on the world anyway (see also Carl Sagan’s dragon).[5] This seems to be the assumption Smith is making, as he writes “Kyle didn’t previously experience a bunch of realities and learn that some of them had all-powerful gods while others didn’t”, implying that it would be theoretically possible to learn whether a given reality had an all-powerful god.

That assumption still leaves open the possibility that, even if these propositions were true, it’d be extremely unlikely we’d observe any evidence of them at all. This clearly makes it harder to assign probabilities to these propositions that are likely to track reality well. But is it impossible to assign any probabilities, or to make sense of probabilities that we assign?

It seems to me (though I’m unsure) that we could assign probabilities using something like the following process:

  1. Try to think of all (or some sample of) the propositions that we know have ever been made that are similar to the proposition in question. This could mean something like one or more of the following:

    • All claims of a religious nature.
    • All claims that many people would consider “supernatural”.
    • All claims where no one really had a particular idea of what consequences we should expect to observe if they were true rather than false. (E.g., ghosts, given that they’re often interpreted as being meant to be invisible and incorporeal.)
    • All claims that are believed to roughly the same level by humanity as a whole or by some subpopulation (e.g., scientists).
  2. Try to figure out how many of these propositions later turned out to be true.

    • This may require debating what counts as still being the same proposition, if the proposition was originally very vague. For example, does the ability to keep objects afloat using magnets count as levitation?
  3. Do something along the lines of reference class forecasting using this “data”.

    • This’ll likely require deciding whether certain data points count as a relevant claim turning out to not be true versus just not yet turning out to be true. This may look like inside-view-style thinking about roughly how likely we think it’d be that we’d have observed evidence for that claim by now if it were true.
    • We might do something like giving some data points more or less “weight” depending on things like how similar they seem to the matter at hand or how confident we are in our assessment of whether that data point “turned out to be true” or not. (I haven’t thought through in detail precisely how you’d do this. You might instead construct multiple separate reference classes, and then combine these like in model combination, giving different weights to the different classes.)
  4. If this reference class forecasting suggests odds of 0%, that seems too confident; arguably, we should never use probabilities of 0 or 1. One option for handling this is Laplace’s rule of succession (see the code sketch after this list).

    • For example, if we found that 18 out of 18 relevant claims for which we “have data” “turned out to be false”, our reference class forecast might suggest there’s a 100% chance (because 18/18=1) that the claim under consideration will turn out to be false too. To avoid this absolute certainty, we add 1 to the numerator and 2 to the denominator (so we do 19/20=0.95), and find that there seems to be a 95% chance the claim under consideration will turn out to be false too.
    • There may be alternative solutions too, such as letting the inside view considerations introduced in the next step move one away from absolute certainty.
  5. Construct an "inside view" relevant to how likely the claim is to be true. This may involve considerations like:

    • Knowledge from other fields (e.g., physics), and thinking about how consistent this claim is with that knowledge (and perhaps also about how well consistency with knowledge from other fields has predicted truth in the past).
    • The extent to which the claim violates Occam’s razor, and how bad it is for a claim to do so (perhaps based on how well sticking to Occam’s razor has seemed to predict the accuracy of claims in the past).
    • Explanations for why the claim would be made and believed as widely as it is even if it isn’t true. E.g., explanations from the evolutionary psychology of religion, or explanations based on memetics.
  6. Combine the reference class forecast and the inside view somehow. (Perhaps qualitatively, or perhaps via explicit model combination.)
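
As a purely illustrative sketch, here's roughly what steps 2 to 6 could look like if done explicitly. All the reference classes, counts, and weights below are made up for illustration, and simple linear pooling is just one of many ways the combination in step 6 could be done:

```python
# Illustrative sketch of steps 2-6: Laplace's rule of succession applied to some
# (made-up) reference classes, then a weighted combination with an "inside view".

def rule_of_succession(true_count, total_count):
    """P(the next claim of this kind is true), avoiding probabilities of exactly 0 or 1."""
    return (true_count + 1) / (total_count + 2)

# Hypothetical reference classes: (claims that turned out true, total claims, weight).
reference_classes = [
    (0, 18, 0.5),   # e.g. "supernatural" claims with some expected observable consequence
    (2, 120, 0.3),  # e.g. a broader class of widely believed but initially unevidenced claims
]

inside_view = (0.001, 0.2)  # (probability, weight) from physics, Occam's razor, etc.

estimates = [(rule_of_succession(t, n), w) for t, n, w in reference_classes]
estimates.append(inside_view)

# Step 6 via simple linear pooling: a weighted average of the individual estimates.
combined = sum(p * w for p, w in estimates) / sum(w for _, w in estimates)

print([round(p, 3) for p, _ in estimates])  # [0.05, 0.025, 0.001]
print(round(combined, 4))                   # 0.0326
```

The point of writing it out like this isn't the particular numbers; it's that each input (the reference classes, the counts, the weights, the inside-view estimate) becomes something one can inspect, argue about, and revise, which is part of what would give the resulting probability whatever meaning it has.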

I don’t expect that many people actually, explicitly use the above process (I personally haven’t). But I think it’d be possible to do so. And if we want to know “what to make of” probability estimates for these sorts of claims, we could perhaps think of what we actually do, which is more implicit/intuitive, as "approximating" that explicit process. (But that’s a somewhat separate and debatable claim; my core claims are consistent with the idea that in practice people are coming to their probability assignments quite randomly.)

Another, probably more realistic way people could arrive at probability estimates for these sorts of claims is the following:

  1. Do some very vague, very implicit version of the above.

    • E.g., just “thinking about” how often things “like this” have seemed true in the past (without actually counting up various cases), and “thinking about” how likely the claim seems to you, when you bear in mind things like physics and Occam’s razor.
  2. Then introspect on how likely this claim “feels” to you, and try to arrive at a number to represent that.

    • One method to do so is Hubbard’s “equivalent bet test” (described here).

Many people may find that method quite suspicious. But there's evidence that, at least in some domains, it's possible to become fairly "well calibrated" (i.e., to assign probabilities that roughly match how often the relevant claims turn out to be true) following "calibration training" (see here and here). Ideally, the person using that method would have engaged in such calibration training before. If they have, they might add a third step, or add, as part of step 2, an adjustment to account for their tendency to over- or underestimate probabilities (either probabilities in general, or probabilities of roughly this kind).
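
For concreteness, here's one way an equivalent-bet-style elicitation could be sketched: as a bisection toward one's indifference point between betting on the claim and betting on a wheel with a known win probability. This is my own illustrative rendering of the idea rather than Hubbard's exact procedure, and the prefers_claim_over_wheel judgment stands in for the introspective step that no code can do for you:

```python
# A minimal, illustrative sketch of an "equivalent bet" style elicitation:
# repeatedly ask whether you'd rather bet on the claim being true or on a
# wheel with a known win probability, and bisect toward the indifference point.

def equivalent_bet_test(prefers_claim_over_wheel, low=0.0, high=1.0, rounds=10):
    """prefers_claim_over_wheel(p) should return True if you'd rather bet on the
    claim than on a wheel that pays out with known probability p."""
    for _ in range(rounds):
        p = (low + high) / 2
        if prefers_claim_over_wheel(p):
            low = p   # the claim feels more likely than p, so search higher
        else:
            high = p  # the wheel feels like the better bet, so search lower
    return (low + high) / 2

# Example: someone whose introspective judgments imply roughly a 2% credence.
credence = equivalent_bet_test(lambda p: p < 0.02)
print(f"Elicited credence: {credence:.3f}")  # ~0.020
```

A calibration adjustment of the kind just described could then be applied to the elicited number as a final step.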

I’m not aware of any evidence regarding whether people can become well-calibrated for these “supernatural-type claims”. And I believe there’s somewhat limited evidence on how well calibration training generalises across domains. So I think there are major reasons for skepticism, which I’d translate into large confidence intervals on my probability distributions.

But I’m also not aware of any extremely compelling arguments or evidence indicating that people wouldn’t be able to become well-calibrated for these sorts of claims, or that calibration training wouldn’t generalise to domains like this. So for now, I think I’d say that we can make sense of probability estimates for claims like these, and that we should have at least a very weak expectation that methods like the above will result in better probability estimates than if we acted as though we knew nothing at all.

Assuming that whether the claim is true can't affect the world

I think the much trickier case is if we assume that the truth of these claims would never affect the (natural/physical/whatever) world at all, and would thus never be observable. I think the standard rationalist response to this possibility is dismissiveness, and the argument that, under those conditions, whether or not these claims are true is an utterly meaningless and unimportant question. The claims are empty, and not worth arguing about.

I find this response very compelling, and it’s the one I’ve typically gone with. I think that, if we can just show that probabilities can be meaningfully assigned to all claims that could ever theoretically affect the natural world at all, that’s probably good enough.

But what if, for the sake of argument, we entertain the possibility that some claims may never affect the natural world, and yet still be important? My not dismissing that possibility outright and immediately may annoy some readers, and I can sympathise with that. But it seems to me at least interesting to think about. And here’s one case where that possibility actually does seem to me like it could be important:

What if non-naturalistic moral realism is “correct”, and what that means is that “moral facts” will never affect the natural world, and will thus never be observable, even in principle - but our actions are still somehow relevant to these moral facts? E.g., what if it could be the case that it’s “good” for us to do one thing rather than another, in some sense that we “really should” care about, and yet “goodness” itself leaves no trace at all in the natural world? (This could perhaps be something like epiphenomenalism, but here I’m going quite a bit beyond what I really know.)

In this case, I think reference class forecasting is useless, because we’d never have any data on the truth or falsehood of any claims of the right type.

But at first glance, it still seems to me like we may be able to make some headway using inside views, or something like arriving at a “feeling” about the likelihood and then quantifying this using the equivalent bet test. I’m very unsure about that, because usually those methods should rely on at least some somewhat relevant data. But it seems like perhaps we can still usefully draw on considerations like how often Occam’s razor has worked well in the past.

And this also reminds me of Scott Alexander’s post on building intuitions on non-empirical arguments in science (additional post on that here). It also seems reminiscent of some of Eliezer Yudkowsky’s writing on the many-worlds interpretation of quantum mechanics, though I read those posts a little while ago and didn’t have this idea in mind at the time.[6]

Closing remarks

This quick post has become longer than planned, so I’ll stop there. The basic summary is that I tentatively claim we can always assign meaningful probabilities, even to supernatural-type (or even actually supernatural) claims. I’m not claiming we should be confident in these probabilities, and in fact, I expect many people should massively reduce their confidence in their probability estimates. I’m also not claiming that the probabilities people actually assign are reliably better than chance - that’s an empirical question, and again there’d likely be issues of overconfidence.

As I said at the start, a major aim of this post is to get feedback on my thinking. So please let me know what you think in the comments.


  1. See this shortform post of mine for other ways of describing the idea that our probabilities might be relatively “untrustworthy”. ↩︎

  2. I think that my “1 minute” example doesn’t demonstrate the superiority of certain probability distributions to an uninformative prior. This is because we could argue that the issue there is that “1 minute from now” is far more precise than “2 to 300 years from now”, and an uninformative prior would favour the less precise prediction, just as we’d like it to. But I think my other example does indicate, if our intuitions on that are trustworthy, that some probability distributions can be superior to an uninformative prior. This is because, in that example, the predictions mentioned spanned the same amount of time (a millennium), just starting at different points (~now vs ~50 quintillion years from now). ↩︎

  3. These terms can be defined in many different ways. Footnote 15 of this is probably a good quick source. This page is also relevant, but I’ve only skimmed it myself. ↩︎

  4. Though in the case of non-naturalistic moral realism, I might still act as though it’s correct, to a substantial extent, based on a sort of expected value reasoning or Pascal’s wager. But I’m not sure if that makes sense, and it’s not directly relevant for the purposes of this post. (I hope to write a separate post about that idea later.) ↩︎

  5. I acknowledge that this may mean that these claims aren’t “actually supernatural”, but they still seem like more-challenging-than-usual cases for the idea that we can always assign meaningful probabilities. ↩︎

  6. To be clear, I’m not necessarily claiming that Alexander or Yudkowsky would approve of using this sort of logic for topics like non-naturalistic moral realism or the existence of a god, rather than just dismissing those questions outright as meaningless or utterly inconsequential. I’m just drawing what seems to me, from memory, some potential connections. ↩︎

Comments

I think this is possibly rehashing the main point of disagreement between frequentists and subjectivists, i.e. whether probability is only sensible after the fact or whether it is also meaningful to talk about probabilities before any data is available. I'm not sure this debate will ever end, but I can tell you that LW culture leans subjectivist, specifically along Bayesian lines.

Ah, your comment, and those of jmh and Dagon, have made me realise that I didn’t make it clear that I was taking a Bayesian/subjectivist interpretation of probability as a starting assumption (probably because I wrote this quickly and I know LessWrong leans that way). My intention was not really to engage in a Bayesian vs frequentist debate, as I feel that's been adequately done elsewhere, but instead to say, "Let's assume the Bayesian interpretation, and then try to put it up against what seem like particularly challenging cases for the idea that someone can arrive at a meaningful probability estimate, and think about how one might arrive at an estimate even then, and what that might mean."

And by "what that might mean", I don't mean just "Kyle thinks there's a 0.001% chance a god exists", but rather something like how we should interpret why Kyle gave that number rather than something orders of magnitude higher or lower (but that still matches the fuzzy, intuitive notion of "very low odds" which is perhaps all Kyle's introspection on his gut feeling would give him), and how meaningful it is that he gave that particular number, rather than some other number.

The broader context is that I was working on a post about the distinction between risk and uncertainty (now posted here), and came across Chris Smith's example of Kyle the atheist. And I wanted to sort of take up the challenge implicit in what Smith wrote, and then take up the further challenge of a version of that claim that might never affect the world at all, such that we'd never get any data on it or, arguably, on anything in its most obvious reference class. In that case, let's still agree we're sticking with the Bayesian interpretation, but then ask what one's specific choice of subjective credence really means.

But I definitely should’ve made it more explicit that I was assuming the Bayesian interpretation and that those were the purposes of my post. I've now edited the intro to hopefully fix those issues.

(In case anyone's for some reason interested in reading the comments in context, here's what the post was originally like.)

Hi Michael,

Thanks for writing this! I'm glad to see my post getting engagement, and I wish I'd joined the discussion here sooner.

I feel like my argument got strawmanned (and I don't think you did that intentionally). I fully agree with this bit:

"Methods like the above will result in better probability estimates than if we acted as though we knew nothing at all."

I think it's entirely reasonable for someone to say: "I feel safe walking out the door because I think there's an extremely low probability that Zeus will strike me down with a thunderbolt when I walk outside."

What I object to is the idea that reasonable people do (or in some sense ought to) make sense of all uncertainty in terms of probability estimates. I think combining hazy probability estimates with tools like probabilistic decision theory will generally have bad consequences.

I very much agree with Dagon's comment:

Models are maps. There's no similarity molecules or probability fields that tie all die rolls together. It's just that our models are easier (and still work fairly well) if we treat them similarly because, at the level we're considering, they share some abstract properties in our models.

Hi Chris :)

Sorry you felt your argument got strawmanned. Perhaps I should've been clearer about the relationship between this post and yours. Basically, the Kyle example in your post prompted me to consider the questions in this post, and see if I could test the idea of assigning and making sense of probabilities against cases like those. It was more that the example in your post was a jumping-off point, which I decided to take as throwing down a gauntlet I could test my own ideas against. It wasn't like I saw this post as focusing on arguing against any core claims in yours.

This is part of why I actually mention you and your post only a couple times, and don't say things like "I think Smith is wrong", but rather "I think ultimately we can make sense of Kyle’s probability estimate, and that Kyle can have at least some grounding for it." Which doesn't directly conflict with what you say in that quote. You say you don't know what sense to make of Kyle's estimate, not that we can't make sense of his estimate. I suggest one way we could make sense of estimates in situations like that one, though I note that that's not likely to be what people actually are doing.

So to what seems to me to be the core claim in your post, and definitely the claim you emphasise in this comment, the main thing I say in this post is:

I’m not claiming we should be confident in these probabilities, and in fact, I expect many people should massively reduce their confidence in their probability estimates. I’m also not claiming that the probabilities people actually assign are reliably better than chance - that’s an empirical question, and again there’d likely be issues of overconfidence.

Which I think can be consistent with your view, though it doesn't take a strong stance on it.

Does this clear up how I see the relationship between your post and this one? Or is there something that still feels strawmanny in here?

In any case, I did write a different post that actually does (sort-of, tentatively, and with some nuance) disagree in one section with what I think you were arguing for in your optimizer's curse post. (Though it also accepts some of your claims as true and important, and it was because of your post that I learned about the optimizer's curse, so I'm grateful for that.) I'd be quite interested in your thoughts on that post (regarding how I captured your view, what you think about my view in that section, and to be honest the rest of the post too). The post is here.

(Also, as a meta point, a major goal with most of what I write is to try to capture as clearly as possible what I think is true, and then see what people say about it, so that I can learn from that myself. I suspect I'll always do this, but it's especially the case at the moment, as I'm relatively new to EA and don't have a background in economics, decision theory, philosophy, etc. This also means that there'll probably be a positive correlation between 1) the length of my comments/posts somewhat disagreeing with someone and 2) the degree to which they seem to me to be clever and to have thought something through quite a bit, even if I currently disagree with them. That's because those people are the ones I suspect I'd be most likely to learn from interacting with.)

Just a passing thought here. Is probability really the correct term? I wonder if what we do in these types of cases is more an assessment of our confidence in our ability to extrapolate from past experience into new, and often completely different, situations.

If so, that is really not a probability about the event we're thinking about -- though perhaps it could be seen as one about our ability to make "wild" guesses (and yes, that is hyperbole) about stuff we don't really know anything about. Even there I'm not sure probability is the correct term.

With regard to the supernatural things, that tends to be something of a hot button for a lot of people, I think. Perhaps a better casting would be things we have some faith in -- which tend to be things we must infer rather than have any real evidence providing some proof. I think these change over time -- we've had faith in a number of theories that were later proven -- electrons, for example, or other subatomic particles.

But then what about dark matter and energy? The models seem to say we need that but as yet we cannot find it. So we have faith in the model and look to prove that faith was justified by finding the dark stuff. But one might ask why we have that faith rather than being skeptical of the model, even while acknowledging it has proven of value and helped expand knowledge. I think we'd have a better discussion about faith in this context (perhaps) than if we get into religion and supernatural subjects (though arguably, to my view, we should treat them the same as the faith we have in other models).

Yeah, this seems like we're using "probability" to mean different things.

Probabilities are unavoidable in any rational decision theory. There is no alternative to assigning probabilities to expected experiences conditional on potential actions. https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences .

Going from probability of anticipated experience to more aggregated, hard-to-resolve probabilities about modeled groupings of experiences (or non-experiences) is not clearly required for anything, but is more of a compression of models, because you can't actually predict things at the detailed level the universe runs on.

So the map/territory distinction seems VITAL here. Probability is in the map. Models are maps. There's no similarity molecules or probability fields that tie all die rolls together. It's just that our models are easier (and still work fairly well) if we treat them similarly because, at the level we're considering, they share some abstract properties in our models.


Ah, these two comments, and that of G Gordon Worley III, have made me realise that I didn't at all make explicit that I was taking the Bayesian interpretation of probability as a starting assumption. See my reply to G Gordon Worley III for more on that, and the basic intention of this post (which I've now edited to make it clearer).

(See my other comments for what I meant by probability)

I don't know much about dark matter and energy, but I'd say they're relatively much less challenging cases. I take it that whether they exist or not should already affect the world in observable ways, and also that we don't have fundamental reasons to expect we could never get more "direct observations" of their existence? I could be wrong about that, but if that's right, then that's just something in the massive category of "Things that are very hard to get evidence about", rather than "Things that might, by their very nature, never provide any evidence of their existence or lack of existence." I'd say that's way closer to the AGI case than to the "a god that will literally never interact with the natural world in any way" case. So it seems pretty clear to me that it can be handled with something like regular methods.

My intention was to find a particularly challenging case for arriving at, and making sense of, subjective probabilities, so I wanted to build up to claims where whether they're true or not would never have any impact at all on the world. (And this just happens to end up involving things like religion and magic - it's not that I wanted to cover a hot button topic on purpose, or debate religion, but rather I wanted to debate how to arrive at and make sense of probabilities in challenging cases.)

In practice, I try to understand the generator for the claim. I.e. the experience plus belief structures that lead a claim like it to make sense to the person. This doesn't address the central problem, and on inspection I guess what I'm doing is trying to reconcile my own intuitive sense of the minuscule probability of the claim as stated to me with the much higher probability implied by the form of the claim to them.

Also, Knightian uncertainty seems relevant, but I'm not sure how, quantitatively speaking.

In practice, I try to understand the generator for the claim. I.e. the experience plus belief structures that lead a claim like it to make sense to the person.

I think that makes sense. And what I was interested in here is sort of like thinking about what the generator could actually be in cases that seem so unlike anything one has actually experienced or had direct evidence of, and, in the most extreme case, something that, by its very nature, would never leave any evidence of its truth or falsity.

Also, Knightian uncertainty seems relevant, but I'm not sure how, quantitatively speaking.

This post was sort-of a spin-off from another post on the idea of a distinction between risk and Knightian uncertainty, which I've now posted here. So it's indeed related. But I basically reject the risk-uncertainty distinction in that post (more specifically, I see there as being a continuum, rather than a binary, categorical distinction). So this post is sort-of like me trying to challenge a current, related belief of mine by trying to see how subjective probabilities could be arrived at, and made sense of, in a particularly challenging case. (And then seeing whether people can poke holes in my thinking.)

(I've now edited the post to make it clear that this came from and is related to my thinking on Knightian uncertainty.)

in the most extreme case, something that, by its very nature, would never leave any evidence of its truth or falsity.

In Buddhism this comes from fundamental ignorance: bundling together incoherent concepts due to not perceiving their actual structure. The parable of the coins is used to illustrate that the child and the normal person both have different types of ignorance about coins (currency), and only the money changer has a correct causal view.

As, basically, an atheist, my response to the question 'Is there an all-powerful god?' is to ask: is that question actually meaningful? Is it akin to asking 'is there an invisible pink unicorn?', or 'have you stopped beating your wife yet?'. To wit, a mu situation https://en.wikipedia.org/wiki/Mu_(negative) .

There are a lot of different types of question, and probabilities don't seem to mean the same thing across them. Sometimes those questions are based on fuzzy semantics that require interpretation, and may not necessarily correspond to a possible state of affairs.

The possibility of a god existing doesn't equate, to me, to seeing if a possible thing exists or not, but rather whether the set of concepts are in any way possible. This is a question about the very nature of reality, and I'm pretty sure that reality is weird enough that the question falls far short of having any real meaning.

Your comment made me realise that I skipped over the objection that the questions are too ambiguous to be worth engaging with. I've now added a paragraph to fix that:

To me, and presumably most LessWrong readers, the most obvious response to these questions is to dissolve them, or to at least try to pin the questioner down on definitions. And I do think that's very reasonable. But in this post I want to put my (current) belief that "we can always assign probabilities to propositions (or at least use something like an uninformative prior)" to a particularly challenging test, so from here on I'll assume we've somehow arrived at a satisfactorily precise understanding of what the question is actually meant to mean.

I think the reason why I initially skipped over that without noticing I'd done so was that:

  • this post was essentially prompted by the post from Chris Smith with the "Kyle the atheist" example
  • Smith writes in a footnote "For the benefit of the doubt, let’s assume everyone you ask is intelligent, has a decent understanding of probability, and more or less agrees about what constitutes an all-powerful god."
  • I wanted to explore whether the idea of it always being possible to assign probabilities could stand up to that particularly challenging case, without us having to lean on the (very reasonable) strategy of debating the meaning of the question. I.e., I wanted to see if, if we did agree on the definitions, we could still come to meaningful probabilities on that sort of question (and if so, how).

But I realise now that it might seem weird to readers that I neglected to mention the ambiguity of the questions, so I'm glad your comment brought that to my attention.

There are a lot of different types of question, and probabilities don't seem to mean the same thing across them.

There are definitely a lot of different types of questions. There are also definitely multiple interpretations of probability. (This post presumes a Bayesian/subjectivist interpretation of probability, but a major contender is the frequentist view.) And it's definitely possible that there are some types of questions where it's more common, empirically speaking, to use one interpretation of probability than another, and possibly where that's more useful too. But I'm not aware of it being the case that probabilities just have to mean a different thing for different types of questions. If that's roughly what you meant, could you expand on that? (That might go to the heart of the claim I'm exploring the defensibility of in this post, as I guess I'm basically arguing that we could always assign at least slightly meaningful subjective credences to any given claim.)

If instead you meant just that "a 0.001% chance of god being real" could mean either "a 0.001% chance of precisely the Judeo-Christian God being real, in very much the way that religion would expect" or "a 0.001% chance that any sort of supernatural force at all is real, even in a way no human has ever imagined at all", and that those are very different claims, then I agree.

The possibility of a god existing doesn't equate, to me, to seeing if a possible thing exists or not, but rather whether the set of concepts are in any way possible. This is a question about the very nature of reality, and I'm pretty sure that reality is weird enough that the question falls far short of having any real meaning.

I don't understand the last half of that last sentence. But as for the rest, if I'm interpreting you correctly, here's how I'd respond:

The probability of a god existing is not necessarily equal to the probability of "the set of concepts [being] in any way possible" (or we might instead say something like "it being metaphysically possible", "the question even being coherent", or similar). Instead, it's less than or equal to that probability. That is, a god can indeed only exist if the set of concepts are in any way possible, but it seems at least conceivable that the set of concepts could be possible and yet it still happen to be that there's no god.

And in any case, for the purposes of this post, what I'm really wondering about is not what the odds of there being a god are, but rather whether and how we can arrive at meaningful probabilities for these sorts of claims. So I'd then also ask whether and how we can arrive at a meaningful probability for the claim "It is metaphysically possible/in any way possible that there's a god" (as a separate claim to whether there is a god). And I'd argue we can, through a process similar to the one described in this post.

To sketch it briefly, we might think about previous concepts that were vaguely like this one, and whether, upon investigation, they "turned out to be metaphysically possible". We might find they never have ("yet"), but that that's not at all surprising, even if we assume that those claims are metaphysically possible, because we just wouldn't expect to have found evidence of that anyway. In which case, we might be forced to either go for way broader reference classes (like "weird-seeming claims", or "things that seemed to violate Occam's razor unnecessarily"), or abandon reference class forecasting entirely, and lean 100% on inside-view type considerations (like our views on Occam's razor and how well this claim fits with it) or our "gut feelings" (hopefully honed by calibration training). I think the probability we assign might be barely meaningful, but still more meaningful than nothing.