My question is strongly related to this post:

Bayes for Schizophrenics: Reasoning in Delusional Disorders by Scott Alexander

My question: can you compensate for the absence of priors with something else, even something irrational, such as wishful thinking or faith?

I'm interested in this because I want to know how I think. I have a suspicion that maybe something's wrong with my thinking. Maybe it's not Bayesian.

Note: I struggle with the simplest math, so sorry for possible mistakes and whatnot. It's part of the reason why I can't be sure if my thinking is Bayesian or not.


Librarian or farmer?

I encountered the following example in this video "Bayes theorem, the geometry of changing beliefs" by 3Blue1Brown.

Steve is very shy and withdrawn, invariably helpful but with very little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.

Which of the following do you find more likely: “Steve is a librarian” or “Steve is a farmer”? (from Kahneman and Tversky)

The trick is that some people don't consider how many librarians and farmers there are in general (the prior knowledge). This is the base rate fallacy.
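
To make the base rate concrete, here is a minimal sketch of the calculation. The prior and the "fits the description" probabilities below are made-up assumptions in the spirit of the video, not real statistics:

```python
# Minimal Bayes sketch for the Steve example.
# All numbers below are assumptions chosen for illustration.

p_librarian = 1 / 21           # prior: roughly 1 librarian for every 20 farmers (assumed)
p_farmer = 20 / 21

p_desc_given_librarian = 0.40  # P(description | librarian), assumed
p_desc_given_farmer = 0.10     # P(description | farmer), assumed

# Bayes' theorem: P(librarian | description)
numerator = p_desc_given_librarian * p_librarian
evidence = numerator + p_desc_given_farmer * p_farmer
p_librarian_given_desc = numerator / evidence

print(p_librarian_given_desc)  # ~0.17: despite the description, "farmer" is still more likely
```

Ignoring the base rate means comparing only the two likelihoods (0.40 vs 0.10), which is where the intuitive answer "librarian" comes from.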

But I'm having trouble imagining what irrational choices or behaviour it may lead to. For example, if I actually want to find Steve in some real place, of course I'm going to care how many librarians and farmers there are.

So, I think in some situations a prior may seem missing, but pop right back up when it's really needed.

Note: when such questions or thought experiments get criticized for the lack of details, sometimes the criticism is answered with "you have to be able to think without details". But I don't think a situation can exist without details (even as a fantasy).

Bank teller or feminist?

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable? "Linda is a bank teller" or "Linda is a bank teller and is active in the feminist movement"?

The infamous example. A more specific claim can't be more probable than a more general one. This is the conjunction fallacy.
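
A toy head-count makes the impossibility visible (the numbers are made up):

```python
# Toy population illustrating the conjunction rule; all counts are made up.
total = 1000
bank_tellers = 50               # people who are bank tellers
feminist_bank_tellers = 20      # bank tellers who are also active feminists
                                # (necessarily a subset of bank_tellers)

p_teller = bank_tellers / total                         # 0.05
p_teller_and_feminist = feminist_bank_tellers / total   # 0.02

assert p_teller_and_feminist <= p_teller
# However well the description fits Linda, "teller AND feminist"
# can never be more probable than "teller" alone.
```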

But it's even harder for me to imagine the consequences of this mistake in terms of actions and decisions. Not surprising, since it's a logical impossibility.

And if I want to actually find Linda, maybe I need to prioritize more detailed leads. For example, check feminist bank tellers first (maybe checking just "bank tellers" is simply impossible).

This example is not about priors, I guess, but about the use of probability in general. It may depend on your goals.

Burdensome Details

Burdensome Details (post by Eliezer Yudkowsky)

The conjunction rule of probability theory states that a conjunction (A and B) is necessarily less probable than one of the conjuncts alone (A). Adding more details to a theory may make it sound more plausible to human ears because of the representativeness heuristic, even as the story becomes normatively less probable, as burdensome details drive the probability of the conjunction down (this is known as conjunction fallacy). Any detail you add has to be pinned down by a sufficient amount of evidence; all the details you make no claim about can be summed over.
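
The "summed over" part can also be shown with toy numbers: leaving a detail unspecified means adding up every way it could turn out, which can never lower the probability. The joint probabilities below are assumptions:

```python
# Marginalizing out a detail you make no claim about; numbers are made up.
# Joint probabilities P(A, detail = d) for three possible values of the detail.
p_joint = {"d1": 0.02, "d2": 0.05, "d3": 0.01}

p_A = sum(p_joint.values())    # P(A) = sum over all details = 0.08
p_A_with_d2 = p_joint["d2"]    # claiming the specific detail d2: 0.05

assert p_A_with_d2 <= p_A
# Pinning down an extra detail can only cost probability;
# details you stay silent about are summed over "for free".
```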

I think additional details may easily make something more probable... if you don't aim to check those exact verbalized details.

For example, if I don't remember exact stories about my friend, I can tell "half made up" stories, hoping that they convey the character of my friend. My listener doesn't have to evaluate the probability that every story and every detail is true at the same time, in conjunction; they need to evaluate the probability that I'm describing a real person.

I can't verbalize the distilled impression of my friend, so I approach it with details I can verbalize. It's like approaching a limit in math.

So, your evaluation of a story, the way you use probability, may depend on your goal.


Simplicity, alternative ideas

If you don't have priors, you don't have to be obsessed with a crazy idea, because there are many incompatible ideas and the craziest ideas often have greater complexity.

Scott spells it out in his post:

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried).

Against Modest Epistemology

Against Modest Epistemology by Eliezer Yudkowsky

Imagine if there were an uncountable infinity of farmer Steves. Would we believe that librarians don't even exist? Would we believe that almost everyone on Earth is just a delusional farmer Steve who thinks they are somebody else? The chance that you're Alexander the Great would be higher than the chance that you're not a farmer Steve. Any given person would be almost 100% likely to be a farmer Steve.

Maybe sometimes you need to control or choose your priors to make sense of the world. Or control the usage of your priors.

I feel this is relevant to things like the Doomsday argument or Pascal's mugging. Or even the Raven paradox.


"Your wife is an imposter"

I want to explain how I often think about opinions and beliefs. I know that I can't approach this from the point of view of a person who has an actual full-blown delusion.

In the Capgras delusion, the patient, usually a victim of brain injury but sometimes a schizophrenic, believes that one or more people close to her has been replaced by an identical imposter. For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious, but it can get very ugly: in at least one case, a patient got so upset with the deceit that he murdered the hypothesized imposter - actually his wife.

https://en.wikipedia.org/wiki/Capgras_delusion

Sometimes I work with opinions this way: I imagine believing in something and all possible outcomes of it. Then I add up my opinions about all of the outcomes I imagine.

So, if I believe that my wife is an imposter...

  • And it's not true. Then this is an infinitely morally disgusting outcome.

  • And she's really an imposter. Then:

If I can't do anything, the belief is useless.

If I can get more information, I need to get more information about other replaced people. It's (likely) not just about my wife. It's a world-wide conspiracy.

If the imposter is very good, it raises some philosophical questions (what should I feel/how should I act towards the imposter).

If the imposter has some differences with the original, I may be interested in studying those differences as a researcher.

All those outcomes are unlikely to lead to actions such as "leave her" or "treat her as a stranger". And killing is just absolutely bad.

...

So, if I were considering the imposter idea, my opinion would be a combination of those "outcome opinions" above. Like a Schrödinger's cat opinion or a chimera.
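
Here is a minimal sketch of what "adding up outcome opinions" could look like as a rough expected-value calculation; every probability and value below is an invented placeholder, not a claim about the real numbers.

```python
# A rough sketch of "adding up opinions about outcomes" instead of
# evaluating the belief itself. All probabilities and values are
# invented placeholders for illustration.

outcomes = [
    # (description,                         probability, value of acting on the belief)
    ("she is not an imposter (I'm wrong)",       0.999,  -1_000_000),  # morally disastrous
    ("imposter, but I can't do anything",        0.0005,           0),  # belief is useless
    ("imposter, and I can learn more",           0.0005,          10),  # mildly interesting
]

expected_value = sum(p * v for _, p, v in outcomes)
print(expected_value)  # strongly negative: acting on the belief doesn't pay
```

The point is only the shape of the reasoning: even in the branches where the belief is true, nothing recommends hostile action, and the "I'm wrong" branch dominates everything else.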

If I

  • don't think it's too bad to wrongly accuse my wife of being a duplicate.
  • don't care about other replaced people.
  • don't care about people similar to my wife.
  • don't care about new ways to look at my wife's qualities.

Then something may be wrong with my entire attitude. Maybe it means that I don't have a strong reason to be with my wife anyway.

"CIA is after you"

I recently met a patient who was, indeed, convinced the CIA were after her; of note she also had extreme anxiety to the point where her arms were constantly shaking and she was hiding under the covers of her bed. CIA pursuit is probably the best possible reason to be anxious; the only reason we don't use it more often is how few people are really pursued by the CIA (well, as far as we know). My mentor warned me not to try to argue with the patient or convince her that the CIA wasn't really after her, as (she said from long experience) it would just make her think I was in on the conspiracy. This makes sense. "The CIA is after you and your doctor is in on it" explains both anxiety and the doctor's denial of the CIA very well; "The CIA is not after you" explains only the doctor's denial of the CIA. For anyone with a pathological inability to handle Occam's Razor, the best solution to a challenge to your hypothesis is always to make your hypothesis more elaborate.

I could approach the opinion this way:

"I need to achieve a certain goal in a certain time. For that I need to have a certain lifestyle. If I constantly run away from the CIA, I won't achieve this goal. So the belief is practically useless."

If the doctor "is in on it", maybe the doctor can still help me learn to deal with the anxiety, or help me understand the CIA's methods of deception.

I don't know how I would approach the opinion if I didn't have a strong purpose. Maybe I wouldn't care about my life enough to care about the CIA. And what about other people pursued by the CIA? Why should I care only about myself?

"Strangers are your relatives and friends"

The Fregoli delusion is the opposite: here the patient thinks that random strangers she meets are actually her friends and family members in disguise.

https://en.wikipedia.org/wiki/Fregoli_delusion

If many people are just my friends in disguise, they obviously (from the empirical data) don't want to be treated as such. I can respect their desire.

And I can respect the fictional people portrayed by my friends, and hence I have the ability to view them as real, different people and to seriously consider this possibility.


Without beliefs

What if there's a concept that's more important than the idea of "belief"? Such a concept could be more abstract, but at the same time more practical: exact beliefs are not observable, but decisions and actions are.

If you can make decisions based on the evaluated outcomes of beliefs (not the evaluated beliefs themselves), you can believe in everything simultaneously, or hold an infinitely complicated opinion which you don't need to fully calculate.

And the same idea can be applied to values or emotions.

Maybe this would be similar to the "kernel trick":

For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over pairs of data points in raw representation.

Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space.
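
As a small illustration of that quote (a standard textbook example, nothing specific to beliefs): a polynomial kernel returns the same similarity as an explicit feature map, without ever constructing the feature vectors.

```python
import numpy as np

def explicit_features(x):
    """Explicit feature map corresponding to the kernel (x.y + 1)^2 in 2 dimensions."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

def poly_kernel(x, y):
    """The same similarity, computed directly in the raw representation."""
    return (np.dot(x, y) + 1.0) ** 2

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])

print(np.dot(explicit_features(x), explicit_features(y)))  # 25.0
print(poly_kernel(x, y))                                   # 25.0, no feature vectors needed
```

The analogy to the point above: the object you actually compute with (the kernel value, or the decision) can be much simpler than the object it implicitly stands for (the feature vector, or the fully spelled-out belief).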

"Other minds don't exist"

https://en.wikipedia.org/wiki/Solipsism

I imagine myself thinking "other minds exist" or "other minds don't exist". And I see two worst-case outcomes:

  • I'm "ignoring" other people because I don't believe in them... and I'm wrong. This is super sad: I'm just walking past other people potentially condemning some of them to a life of solitude... without a particular reason.
  • I'm not ignoring other people... and I'm wrong. In this case I'm just wrong. Maybe I'm not even really wrong: other people are probably my greatest value and I experience "them" as much as I can by believing in them.

I'm choosing the second "wrong".

"Belief-in-belief exists (and is bad)"

Belief-in-belief by Eliezer Yudkowsky

I imagine myself blaming someone for having a belief-in-belief. And I see two outcomes:

  • I'm wrong. This means I'm trying to use a fantasy about a nonexistent phenomenon to argue that opinions I dislike don't even truly "exist". This is a big negative.
  • And it gets worse in practice. On top of everything I'm wrongly saying that I know a person's mind better than they know it and also making some wrong judgement calls about the other person's behavior. This is outrageous.
  • I'm right. In this case... I get nothing, just a new label to stick on an opinion that I'm already absolutely convinced is very wrong. This is an infinitesimal positive or less.

So, for me believing in "belief-in-belief" is not worth it.

However, without presumptuous negative judgements the concept becomes way more interesting and believable.

Shame test

Sometimes I think about the "shame test":

  • I imagine holding a belief.
  • I imagine getting absolutely stomped because of it. Being left destroyed and ashamed.
  • I ask myself: could I already feel parts of that same shame at the first step?

The idea of the "shame test" is that sometimes there's a qualitative difference between different perspectives, not just quantitative (probability). Sometimes just imagining a different perspective is enough to see that you're wrong.


Critiques of Bayes

At the end of the day one doesn't have to believe that knowledge is represented in the human brain in a Bayesian form anyway:

bayes: a kinda-sorta master-post by Nostalgebraist

Whatever the actual knowledge representation inside our brains looks like, it doesn’t seem like it can be easily translated into the structure of “hypothesis space, logical relations, degrees of belief.”

It seems like we know implicitly about a lot of logical relations that we are incapable of simultaneously considering. We can only apply these logical relations once we’ve started thinking about them.

If that's true, humans must have a way to reason about uncertainty without priors, even if it's an irrational way to reason.

I also heard that maybe David Deutsch has ideas about "thinking without beliefs":

from the post "David Deutsch Eats Rationalism" by Cam Peters

And Deutsch doesn't believe in belief either.

Knowledge can exist without belief. It’s instantiated in books, computer programs, and genes. Further, knowledge may be inexplicit or unconscious. We effortlessly use grammar and laugh appropriately in social situations.

Although, I’m a little unclear if Deutsch thinks knowledge is never belief. Darwin surely believed in natural selection, right?

So... are there ideas about things that can replace priors?

Things similar to priors

I heard that gradient descent works somewhat similarly to priors.

And I read that "regularization" (in statistics) is similar to priors.

As I understand it, you do "regularization" when you look not only at the predictive power of the model, but also at some other properties. (But I'm not a math guy.) Another quote from Nostalgebraist's post:

There is a famous “tradeoff” between bias and variance, because the more complicated you let your models get, the more freedom they have to fit the noise. But reality is complicated, so you don’t want to just restrict yourself to something super simple like linear models. What do you do?

A typical answer is “regularization,” which starts out with an algorithm that can produce really complex models, and then adds in a penalty for complexity alongside the usual penalty for bad data fits. So your algorithm “spends points” like an RPG character: if adding complexity helps fit the data, it can afford to spend some complexity points on it, but otherwise it’ll default to the less complex one.

This point has been made by many people, but Shalizi made it well in the very same post I linked earlier: Bayesian conditionalization is formally identical to a regularized version of maximum likelihood inference, where the prior is the regularizing part. That is, rather than just choosing the hypothesis that best fits the data, full stop, you mix together “how well does this fit the data” with “how much did I believe this before.”
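
To make the "prior = regularizer" identity concrete, here is a small numerical sketch under the standard assumptions (linear model, Gaussian noise, Gaussian prior on the weights; all data made up): minimizing the ridge-penalized loss gives the same answer as the Bayesian MAP estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data (all made up).
X = rng.normal(size=(50, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.5, size=50)

lam = 2.0  # ridge penalty strength; in the Bayesian reading, noise variance / prior variance

# Closed-form MAP estimate for a Gaussian likelihood with a zero-mean Gaussian prior on w:
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Regularized maximum likelihood: minimize ||y - Xw||^2 + lam * ||w||^2
# by plain gradient descent (the optimizer here is incidental).
w = np.zeros(3)
for _ in range(5000):
    grad = -2 * X.T @ (y - X @ w) + 2 * lam * w
    w -= 0.001 * grad

print(np.allclose(w, w_map, atol=1e-6))  # True: the prior acts as the regularizer
```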
