Actually updating can be harder than it seems. Hearing the same advice from other people and only really understanding it the third time (though internally you felt like you really understood the first time) seems inefficient. Having to give yourself the same advice, or have the same conversation with yourself over and over again, also seems pretty inefficient. Recently, I’ve made significant progress with actually causing internal shifts, and the advice... well, you’ve probably heard it before. But hopefully, this time you’ll really get it.

Signs you might not be actually updating

  • You do some focusing, discover a problem, talk to a part of yourself, and work through it. Then, a week later, you find yourself having the same conversation with that part of you.
  • You change your mind about doing something, but suspiciously your behaviour doesn’t change, or doesn’t stay changed for long.
  • Someone says “X” and you go “oh yes, X”. Then, some time passes. Then someone else says “X” and you go “ohhh yes, X, now I get it”. Then, some time passes. Then someone else says “X” and you go “ohhhh yes of course, X, now I really get it”. Etc.
  • You feel like you believe X, yet keep finding yourself behaving as though you don’t believe X.
  • You feel like your rationalisations are running so deep that you can’t seem to catch them. You're doing things that seem to make sense, but then something happens that makes you feel like your behaviour was actually driven by a 'rationalisation'. To make this clearer, imagine something happens (event Y) that changes your opinion of X. Maybe you realise that Y 'shouldn't' be much evidence at all, and decide it will only slightly affect your opinion. Looking at the old evidence and this thing Y (which you have actively decided to weight only a small amount), you end up with a certain conclusion. Then something happens that makes you really believe that Y is small evidence. Suddenly, you are looking at the same evidence and this thing Y (weighted the same amount as you tried to weight it before), but you have a different conclusion. It's like you are looking at the evidence through a different 'lens'. It seems like you are able to exploit uncertainty around the evidence to get differing conclusions depending on this 'lens'. The whole thing can leave you with a lot of internal distrust (a toy numerical sketch of this follows the list).
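To make the 'lens' effect concrete, here is a toy sketch in Python. The numbers, the odds/likelihood-ratio framing, and the "Lens A"/"Lens B" labels are all made up for illustration; they are not a claim about any real situation. Event Y is held at the same small weight in both runs, and the only thing that changes is how the ambiguous old evidence gets read, yet that alone flips the conclusion about X.

```python
# Toy illustration: the same small weight on event Y, combined with an
# ambiguous reading of the old evidence, produces opposite conclusions.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply the prior odds for X by the likelihood ratio of each piece of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 1.0   # initially undecided about X
lr_y = 1.2         # event Y, deliberately given only a small weight, held fixed

# Lens A: the ambiguous old evidence reads as mildly supporting X.
print(posterior_odds(prior_odds, [2.0, lr_y]))   # 2.4 -> leaning towards X

# Lens B: after Y 'really sinks in', the same old evidence reads as mildly against X.
print(posterior_odds(prior_odds, [0.5, lr_y]))   # 0.6 -> leaning against X
```

The flip doesn't come from Y itself; it comes from the slack in how the old, uncertain evidence is interpreted, which is exactly where the 'lens' lives.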

Plausible hypotheses

Plausible hypothesis 1: Some things take longer to digest than others. Maybe you just need time to actually update your models.

Plausible hypothesis 2: If you change a fundamental node in your ‘belief network’, it can be hard to change patterns of behaviour and reactions. You might no longer believe thing X but still behave as though you do, because you are mostly running on auto-pilot and habits are hard to break out of. This is especially salient when a piece of actual behaviour is ‘far away’ from the node that has been changed (so that it seems unrelated at a glance).

Plausible hypothesis 3: A lot of the things people are trying to teach you are ‘purple knowledge’. This may mean you just need lots of gesturing at the thing, or need to develop a certain intuition, before it actually makes sense.

I think it’s likely these hypotheses play at least some role in what is happening. However, in my case something else was playing a larger role.

What was going wrong for me

The hypothesis that seems right for my situation: I was not really listening to some parts of me. In an attempt to listen to all parts of me, I was doing a few things that would cause the process to fail:

  • Calling things ‘biases’. Using words like ‘anxiety’ and ‘perfectionism’ that cause me to box up a part of myself and believe blindly that it’s doing something wrong. Warning flags should arise if you realise that you believe something is wrong for reasons separate from the actual words that part of you is saying. If, no matter what a part of you says, you think it’s wrong, you’re being pretty brutal to yourself.
  • Saying “I don’t know what the right answer is and should listen to all parts of me” whilst internally feeling like one side of me is obviously going to win.
  • Looking at a part of myself and believing that it’s trying to help but just thinking that it’s going to be ‘silly’ because it was adapted for a different situation “which definitely doesn’t apply here”.
  • Strawmanning a part of myself. Being quick to feel like I’ve understood a part of myself and resolved the problem there. Wanting to solve the problem quickly and assuming that any feelings of discomfort afterwards were just ‘residue pain’.
  • Deferring to “experts”. Person Y told me X was good and I understand their reasoning, so X must be good. The little part of me that disagrees is just being stupid.
  • Using words like ‘weird’ when talking about any disagreement I have with an ‘expert’ and feeling pressure to update quickly (encourages strawmanning a part of myself, and being quick to think I've understood where a part of me is coming from).
  • Not focusing enough throughout the process to make sure I’m in actual contact with the part of me that feels a certain way. It might be especially hard to be in touch with the parts of you when talking to an 'expert'.
  • Not paying attention to meta parts. Maybe I’m trying to decide between X and Y. I shouldn’t just be paying attention to my feelings about X and Y. It’s good and worthwhile to pay attention to why I’m finding it hard to make this decision, or why I’m finding it confusing. There may be parts that should be involved in the discussion that are missing.
  • Just thinking really hard and not doing experiments. If you are unsure about a tradeoff, or whether something is true or just a nice story, experiment with it in the real world. Come up with hypotheses and test them.

The thing that changed and allowed me to actually start updating more efficiently was that I actually started believing that all parts of me are pretty smart. I started believing this because I started actually listening to myself and realised that these parts of me weren’t saying the ‘obviously wrong’ things I thought they were saying. I stopped just listening to experts and going ‘what they are saying makes sense’, and started having conversations where I just entirely let the part of me that disagreed say all the reasons it disagreed and ‘fight’ the expert. I allowed that part of me to have contact with the world, and this meant that part of me could learn. And it worked.

This whole post is something you’ve probably heard before: “listen to all parts of you”, “don’t write the final line”, etc. None of this stuff was new to me, and yet it feels like a lesson I’ve just learnt. I hope you let the part of you that might think this is all wrong ‘fight’ with me. And hopefully that will cause one of us to actually update towards the truth.

Comments

To make this clearer, imagine something happens (event Y) that changes your opinion of X. Maybe you realise that Y 'shouldn't' be much evidence at all, and decide it will only slightly affect your opinion. Looking at the old evidence and this thing Y (which you have actively decided to weight only a small amount), you end up with a certain conclusion. Then something happens that makes you really believe that Y is small evidence. Suddenly, you are looking at the same evidence and this thing Y (weighted the same amount as you tried to weight it before), but you have a different conclusion. It's like you are looking at the evidence through a different 'lens'. It seems like you are able to exploit uncertainty around the evidence to get differing conclusions depending on this 'lens'. The whole thing can leave you with a lot of internal distrust.

Can you give an illustrative example?

A related example is my "On insecurity as a friend": I'd on some level bought into messages saying that confidence is always good and insecurity is always bad, so when I had feelings of insecurity, I treated them as a sign of irrationality to override.

What I didn't realize was that I had those feelings because of having screwed up socially in the past, and that they were now correctly warning me about things which might again have bad consequences. Just trying to walk over them meant that I was ignoring important warnings, sometimes causing things to blow up in my face. What made the feelings easier to deal with was when I started actually taking them seriously as hypotheses to consider. After that, the relevant part could tone down the intensity of the social anxieties, as shouting so loudly as to force me to withdraw socially wasn't the only way that it could make itself heard.

Along the same lines as TurnTrout, I was wondering about the abstraction versus the specific situation. I am not asking that anyone share anything they would not be comfortable with. However, I do think abstraction from oneself in the analysis can just be another one of the protection mechanisms that allow us to appear to be making progress while still avoiding the underlying truth driving our behaviors.

That said, I think Sara offers some very good items to consider.

Okay, this next bit is not directly related but seems implicit in the posting, and other posts I've read here. Does the LW community tend to see the human mind and "person" as a collection of entities/personalities/agents/thinking processes? Or am I jumping to some completely absurd conclusion on that?

Okay, this next bit is not directly related but seems implicit in the posting, and other posts I've read here. Does the LW community tend to see the human mind and "person" as a collection of entities/personalities/agents/thinking processes? Or am I jumping to some completely absurd conclusion on that?

There are some LWers who think that way, and others who don't. (Among the people who find it a useful model, AFAICT it's usually treated more as a hypothesis to consider and/or a fake framework that is sometimes useful. This sequence is a fairly comprehensive introduction.)

There are also plenty of LWers who don't buy it.

Thinking in terms of internal parts is a mental model that a good portion of the LW community interested in self-improvement techniques uses. You need it for the Internal Double Crux technique that CFAR teaches.

Yet, it's not the only model out there. I personally would rather do a version of Leverage's belief reporting, which assumes that I as a whole either hold a belief or don't, and then do parts work if I believe that a specific belief I can identify is the issue.

As far as abstraction goes, I think it's a key feature for self-introspection. If you are mentally entangled with the part that you are introspecting on, you won't see it clearly.

A lot of meditation is about reaching a mental state where you can look at your thoughts without being associated with them.

Does the LW community tend to see the human mind and "person" as a collection of entities/personalities/agents/thinking processes?

In the extreme form you asked about:

I'm skeptical/unclear on how it would be empirically tested.

Not aiming for IIT, but seeing how it could be true:

Rather than holding all the relevant information in our minds at once, the benefit of taking time to think may come not from explicit focus/consideration but from unconscious processing (thinking about other things) and then making connections, because it takes time to reload the mental workspace in order to visit all the relevant areas.

EDITED: for brevity and clarity.

The thing that changed and allowed me to actually start updating more efficiently was that I actually started believing that all parts of me are pretty smart. I started believing this because I started actually listening to myself and realised that these parts of me weren’t saying the ‘obviously wrong’ things I thought they were saying.


Yeah this is huge. I've had some similar insights myself the last few months and I now think it's one of the most important things that people can do. Which of course requires listening to the parts of you that think the other parts are stupid or silly, as well! And the parts of you that think that thinking about yourself as having parts is weird. Etc.

My new mantra for this is: May I integrate everything that wants to be integrated, as it wants to be integrated.

This rings really true with my own experiences; glad to see it written up so clearly!

I think that lots of meditation stuff (in particular The Mind Illuminated) is pointing at something like this. One of the goals is to train all of your subminds to pay attention to the same thing, which leads to increasing your ability to have an intention shared across subminds (which feels related to Romeo's post). Anyway, I think it's really great to have multiple different frames for approaching this kind of goal!