This is a special post for quick takes by Connor_Flexman. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

"Shut up and multiply" is only about very comparable things (hence the original example with differing numbers of birds). It's obviously very important to make Pareto improvements of the form "hold everything constant, get more of good thing X".

The main failure mode is applying it between un-like things, and accidentally making bad tradeoffs. For example, working very hard to make money to give away but then stagnating, because it turns out doing things you like is actually very important to personal growth and personal growth is very important to achieving your goals in the future. In general, making Kaldor-Hicks improvements that turn out not to be Kaldor-Hicks improvements because things had secret benefits.

"Shut up and divide" helps keep people from getting mind-controlled by Very Large Numbers and devoting all of their time to other people (of whom there are many more than there are of you), but... it smuggles in an insidious and terrible type error (while not correcting the core issue).

"Shutting up and letting an explicit deliberative practice decide the answer" is not about getting your emotions to work more reasonably, as is said in the division post, it's about making decisions where your emotions don't have the proper sense of scale. You're not supposed to walk around actually feeling the desire to save a billion birds at an intensity a billion times stronger than the desire to save one bird. The deliberative analysis isn't about aligning your care, it's about making decisions where your care system is having troubles. To apply it to "how much you care about a random person", as he did, is not the place your care system has troubles! (Of course, plausibly Wei Dai was not actually making this mistake, it's always hard to ensure your ideas aren't being misinterpreted. But it really sounds like he was.)

But still, directly, why do I think you should still care about a random bird, when there are so many more important things to do? Why not overwrite your initial caring system with something that makes more sense, and use the fact that you don't care greatly about the sum total of birds to determine that you don't care greatly about a single bird? Because I desperately want people to protect their learned local gradient.

The initial problem of missing secret benefits to things is well-known in various guises similar to Chesterton's Fence. But Chesterton's Fence isn't very constructive—it just tells you to tread carefully. I think the type of process you should be running to actively identify fake Kaldor-Hicks improvements is protecting the local gradient. If your mind has learned that reading fiction books is more important than going above and beyond on work, even if that supposedly saves lives—preserve this gradient! If your mind has learned that saving a single bird is more important than getting to your appointment on time, even if that supposedly saves lives—preserve this gradient!

The whole point of shutting up to multiply is that your brain is very bad outside a certain regime, but everyone knows your brain is the most profound analysis tool ever created inside its wheelhouse. And local gradients are its wheelhouse. In fact, "using deliberate analysis to decide which of multiple very different goals should be pursued" is the kind of tool that is great in its own regime, namely optimizing quantities in well-known formal situations, but is itself very bad outside of this regime! (Cf. Communism, Goodhart, the failures of high modernism, etc.) To make hard tradeoffs in your daily life, you want to use the analogous principle "shut up and intuit" or "shut up and listen to your desires" or whatever provokes in you the mindset of using your mind's experience. That's the place you'd expect to get the note of discord that says "wait, I think there's actually something pretty bad about working constantly and giving all my money away—what is that about?"

This shortform post makes me wish LW supported bookmarking/sequencing comments internally. Absent that, there's bookmarking the whole shortform, but this comment in particular seems like a step toward something the Sequences seem to be missing.

We might implement bookmarking for shortform posts (though it's a bit of work, and I'm not sure we'd get to it soon). But meanwhile, I'd support this post just getting turned into a top-level post.

To make hard tradeoffs in your daily life, you want to use the analogous principle "shut up and intuit" or "shut up and listen to your desires" or whatever provokes in you the mindset of using your mind's experience

I enjoyed this, and it clarified one thing for me. One question: shouldn't you also listen to the part of your cognition that says "You're wasting time reading too many fiction books" and "You could donate more of your money to charity"?

I think maybe what you're pointing at here is to not immediately make "obvious improvements" but to instead inquire into your own intuitions and look for an appropriately aligned stance.

Good point, and you're right that that's the complex part. It's very hard to state the criterion, but it's the difference between "I feel like I should donate more of my money to charity because of this argument" vs. "I should donate more of my money to charity, which I realized because of this argument".

The deliberative process is like the heuristic in A*: you definitely feel some strong push toward that option, but the S1 pathfinder hasn't approved of the thing until (something happens), which I'm going to call "realizing it". I think this meshes with our other uses of the phrase. Cf. someone who "realizes there is nothing left for them here", or "realizes that person is actually good": they aren't going to have any sort of akrasia about acting on that new belief.
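(For anyone who wants the A* metaphor unpacked, here is a minimal sketch in Python, with toy names of my own invention. The heuristic supplies the "strong push" toward an option, but the algorithm only commits once it actually pops the goal off the frontier; that pop is the analogue of "realizing it".)

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A*: the heuristic biases which option is explored next,
    but nothing counts as a solution until the goal is actually reached."""
    # frontier entries: (estimated total cost, cost so far, node, path)
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path  # only now is the route "realized" and acted on
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in neighbors(node):
            if nxt not in seen:
                g = cost + step_cost
                heapq.heappush(frontier, (g + heuristic(nxt), g, nxt, path + [nxt]))
    return None  # a strong pull toward an option never guaranteed a path

# Toy example: the heuristic pushes toward B first, but the accepted route goes via C.
h = {"A": 2, "B": 1, "C": 1, "goal": 0}          # optimistic guesses (the "push")
graph = {"A": [("B", 1), ("C", 3)], "B": [("goal", 5)], "C": [("goal", 1)]}
print(a_star("A", "goal", lambda n: graph.get(n, []), h.get))  # ['A', 'C', 'goal']
```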

I've understood episteme, techne, and metis for a while, and the vital importance of each, but I've been missing this understanding of gnosis. I now think I've been bouncing off the implication that's bundled into the idea of gnosis: that knowledge of spiritual mysteries is universal, or won't be overturned later, or is "correct". But I think that's a wrong way to look at things.

For example, consider "life philosophies". People put a ton of energy into thinking about existentialism and what to do about the fact that we're all going to die. The important thing people get from it isn't some sort of episteme; nor techne; nor metis. They process it, learn to cope, learn how their values interact with the world—and the big insights here feel spiritual.

Likewise with love. People develop philosophies around love that are clearly not built on the other three kinds of knowledge: they often contain things like "my heart yearns for that kind of thing". The statement "my heart yearns for that kind of thing" is episteme, the decisionless following of the heart is techne, the fact that you should follow your heart is metis, but finding that your heart yearns for the thing is gnosis. It was a spiritual mystery what your heart yearned for, and you figured it out, and to find one of these feels just as spiritual as they say.

I can sort of see how meditation can give rise to these: cutting yourself off from synthetic logical direction and just allowing natural internal annealing to propagate all sorts of updates about your deep values and how to cope with the nature of reality. I can sort of see why people go to "find themselves spiritually" by traveling: new values come out, the standard constraints get loosened, and the resulting depth grows into spiritual knowledge. I can sort of see why drugs, dancing, and sexuality were often used in pagan religious ceremonies meant to cause a revealing of the spirit, an estuary where deep values intermingled.

But all these spiritual insights are about how your mind wants to work, not about episteme-like "correct" universal knowledge. It's not universal, even if the insights look similar from mind to mind. They definitely get overturned later, at least in the limited sense that GR overturned Newton. And "correctness" doesn't really apply to them, because they're about the map being more like the map wants, not about map vs. reality.

This is the first time that I feel like I've actually seen someone use the four knowledge terms ("episteme", "techne", "metis", and "gnosis") in a way that felt like it created a coherent picture. I got used to thinking about "metis" as distinct from other types of knowledge, but this has also helped me place where "techne", "episteme", and "gnosis" fall.

I’ve updated toward significantly less risk from COVID than I expected a week ago, for people aged 25-30:

Old numbers:

  • .2% mortality = 1 expected month
  • 1.5% long-term side effect = 2 expected months
  • 1% mortality from lack of ventilators = 5 expected months (10% hospitalized, 10% of those on ventilators; ventilator supply is ~.03% of the population, and older people need more ventilators, so we run out at ~1% of the population infected; by my estimates, no shutdown measures had been taken until approximately 2 days before we hit 1%)
  • Total = 8 expected months

New numbers (from the ICL report; someone please tell me if I'm misunderstanding their context; arithmetic for both lists sketched below):

  • .05% mortality = .3 expected months
  • .7% long-term side effect = 1 expected month
  • .1% mortality from lack of ventilators = .5 expected months (2% hospitalized, 5% of those on ventilators)
  • Total = 2 expected months (maybe less if we never go above 2% of population infected in most places due to new shutdown measures, and ventilators are sufficiently mobile to move to crisis zones)
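
For concreteness, here is the arithmetic behind both lists as a minimal sketch. The per-event costs are back-solved from the bullet figures above, not independently sourced: a death valued at ~500 discounted life-months, a long-term side effect at ~135 months-equivalent.

```python
# Expected months lost = sum over risks of p(event) * months lost if it happens.
# Per-event costs are assumptions back-solved from the bullets above.
MONTHS_LOST_DEATH = 500        # assumed: discounted remaining life-months at 25-30
MONTHS_LOST_SIDE_EFFECT = 135  # assumed: months-equivalent of a long-term side effect

def expected_months(p_death, p_side_effect, p_vent_death):
    return (p_death * MONTHS_LOST_DEATH
            + p_side_effect * MONTHS_LOST_SIDE_EFFECT
            + p_vent_death * MONTHS_LOST_DEATH)

print(expected_months(0.002, 0.015, 0.01))    # old: ~8.0 expected months
print(expected_months(0.0005, 0.007, 0.001))  # new: ~1.7, i.e. ~2 expected months
```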

For older people, the numbers changed less (about a factor of 2), e.g. a healthyish parent in their 60s went from about 3 years of expected life lost to roughly 1.5 years.

In general, I feel fine with this outcome—the old numbers I was using were more an average than a median, so the most likely update was downward. I also adjusted the mortality rates downward somewhat, but I didn’t know how far, and the final update was further than I should have guessed. Lastly, a week ago the response was so abysmal that I think it was correct to have a factor of two worse expectation than I do now, just from failures to contain, treat, etc.

The one thing I wish I had done differently was weight South Korea's numbers a little higher a little earlier, since priors were already on the side of lots of undiscovered/mild cases. I thought Wuhan's testing was relatively good and things were partially adjusted for the missing cases, but I went something like two weeks between looks at South Korea's data, and that caused my numbers to lag somewhat. I think I could have been estimating 4-6 weeks as of a week ago, if I had flagged that better to come back to.

That mortality estimate seems a bit too low to me: Sarah Constantin estimated 0.1%-0.2% for someone below 40, and the lower end of that range seems right to me.

The 0.1% mortality from lack of ventilators also seems too low to me. We might be a bit in luck in the Bay Area, since I think we are less likely to get it, but you should still expect that if you do get the virus, you get it when everyone else gets it, which means likely overwhelmed hospitals, and 0.1% mortality seems too low for that case. Sarah Constantin also estimates a total fatality rate of 1%-2% when hospitals are overwhelmed.

I don't trust the numbers in the ICL report particularly much, since they are reliably lower than from other reputable sources, and I don't actually see how they got numbers that were that low. I am currently reading through the study that they are basing their estimates on.

Remember that just like there are a lot of levels to any skill, there are a lot of levels to any unblocking!

It feels to me like perhaps both parties are making a mistake when one person (the discoverer) says, "I finally figured out [how to be emotionally liberated or something]!" and the skeptic is like "whatever, they'll just come back in a few months and say they figured out even more about being emotionally liberated, what a pointless hamster wheel." (Yes, often people are unskilled at this type of thing and the first insight doesn't stick, but I'm talking about the times when it does.)

In these cases, the discoverer will *still find higher levels of this* later on! It isn't that they've discovered the True Truth about [emotional liberation], they've just made a leap forward that resolves lots of their known issues. So even if the skeptic is right that they'll discover another thing in the future that sounds very similar, that doesn't actually invalidate their present insight.

And for the discoverer, often it is seductive to think you've finally solved that domain. Oftentimes most or all of your present issues there feel resolved! But that's because you triangulate from the most pressing issues. In the future, you'll find other cracks in your reality, and need to figure out superficially similar but slightly skewed domains—and thinking you've permanently solved a complicated domain will only hamper this process. But that doesn't mean your insight isn't exactly as good as you think it is.

Sometimes people are explaining a mental move, and give some advice on where/how it should feel in a spatial metaphor. For example, they say "if you're doing this right, it should feel like the concept is above your head and you're reaching toward it."

I have historically had trouble working well with advice like this, and I don't often see it working well for other people. But I think the solution is that for most people, the spatial or feeling advice is best used as an intermediate/terminal checksum, not as something that is constructive.

For example, trying to imagine feeling their feeling, and then seeing what you could do differently to get there, will usually not work (if it does work for you, carry on; this isn't meant for you). The best way for most people to use advice like this is to just notice that your spatial feeling is much different from theirs, be reminded that you definitely aren't doing the same thing as them, and be motivated to go back and try to understand all the pieces better. You're missing some part of the move or context that is generating their spatial intuition, and you want to investigate the upstream generators, not their downstream spatial feeling itself. (Again, this isn't to say you can't learn tricks for making the spatial intuition constructive; just don't think this is expected of you in the moment.)

For explainers of mental moves, this model is also useful to remember. Mental moves that accomplish similar goals in different people will by default involve significantly different moving parts in their minds and microstrategies to get there. If you are going to explain spatial intuitions (that most people can't work easily with), you probably want to do one of the following:

1) make sure they are great at working with spatial intuitions

2) make sure they know it's primarily a checksum, not an instruction

3) break down which parts generate that spatial intuition in yourself, so if they don't have it then you can help guide them toward the proper generators

4) figure out your own better method of helping them work with it that I haven't discovered

5) remember the goal is not to describe your experience as you experience it, but to teach them the skill, and just don't bring up the spatial intuition as if they should be guided by that right now

I like NLP's explanation of this. Submodalities like position and distance aren't common between people, but people DO tend to have similar representations with similar submodalities. I tend to be very kinesthetic with proprioceptive intuitions, but if instead I can say "do this task, wait for some sense, then tell me how you represent that", I can have them work with THEIR representation instead of mine.

This seemed to work decently well for teaching people strategies for overcoming akrasia/procrastination, and I suspect with some tweaking it can be even more consistent.

Remember remember remember, costly signaling is supposed to be about cost-to-fake, not cost-burnt-to-signal. It is not like Bitcoin. If you own an original Picasso, it is costless to show that you own it, but very costly for someone to fake owning it (they would have to commission an elaborate forgery).

“Virtue signaling” should be thought of with this in mind. If you or someone else is frowning upon a virtue signal, that’s not because of the inherent structure of signaling. It means either it’s a corrupted signal, they’re being annoying with their signal, or it’s not a signal to begin with. For example, if someone can post a bunch of tweets about Latest Crisis costlessly, that’s not really a costly signal to begin with. If someone volunteers for many hours at soup kitchens to be a politician even though they hate it, that’s a corrupted signal. If you casually drop all your volunteering accolades in conversation apropos of nothing, that’s a real signal but really annoying.

In many ways this structure mirrors force projection! Cf. Luttwak's The Grand Strategy of the Roman Empire. In the same way that good force projection doesn't require costly forces to be applied, good signaling doesn't require cost to be burnt on a signal. The adept will signal perfectly fine through various proofs provided, without breaking social norms or splurging resources.

I think most of the opprobrium about virtue signaling (or at least the part I object to) is not about the costliness of the signal. It's about the Goodharting of virtue. Caring about those topics and most public actions related to them is NOT a virtue in my book. This is annoying regardless of whether someone actually cares, or they expend effort to pretend to care.

I at least partially agree with this. I'm less interested in virtue signaling per se than in using it as a brief exploration to highlight a common misconception about how signaling works. Plausibly virtue signaling isn't the clearest example of this, but I do think it's a pretty good case of the broader point: people tend to talk about signals mostly when the signals are deficient in various ways, but then that tarnish rubs off onto all signaling universally. I think it's really important to remember that signals are extremely good in general, except the ones that are dumb because they're costly to implement or Goodharted or what-have-you. This really does not come through when people talk about signaling.

Ah, I might use education vs. IQ as an example - education is easier for smarter people to acquire. Of course, a lot of signaling examples are INTENTIONALLY focused on cost - the classic peacock tail signals that the male is fit enough to spend that much energy on its tail. This is a perfect signal - the cost IS the signal, and the ability to undertake that cost is the value being signaled.

Another sad regulation-induced (and likely public health comms-reinforced) inadequacy: we don't report Ct values on PCR tests. Ct stands for cycle threshold: how many amplification cycles a PCR has to run before the virus is detected. So it directly measures viral load (the fewer cycles needed, the more virus in the sample). But it isn't reported to us on tests for some reason: here's an example document saying why it shouldn't be reported to patients or used to help them forecast progression. Imo a very bad and unnecessary decision.

Basically, the reasoning is that the current tests produce highly variable Ct values, which don't do a good job of directly measuring viral load.

If that's the case, and people like you would nonetheless treat the value as a direct measure of viral load, then not giving you the value, to avoid misleading you, seems reasonable.

Yes, they've made it very clear that that's the reasoning, and I am saying I disagree.

A) I still think they are not correct (long evidence below)
B) Ct values are clearly somewhat useful, and the question is how much—and I do not think the public health comms apparatus should stifle somewhat-useful medical information reaching patients or doctors just because I might be misled. That's just way too paternalistic.

As to why I think they're wrong, I'll cross-post from my Facebook thread against the specific pdf linked in the OP, though all the other arguments seem isomorphic afaict. If you don't trust my reasoning but want the reasoning of medical professionals, skip to the bottom.

Basically, the pdf just highlights a bunch of ways that Ct values aren't perfectly precise and reliable. It says nothing about the relative size of the error bars versus the signal, and whether the error bars can drown it out—and they can't. To use a very exaggerated metaphor, it's like the people saying we need to pull J&J because it's not "perfectly safe" without at all looking at the relative cost/benefit.

So, they give a laundry list of factors that will produce variability in Ct values for different measurements of the same sample. But toward the end of the doc, they proclaim that these sources of variability change the result by up to 2-3 logs, as if this were a damning argument against reporting them. The scale of Ct values is ~10 logs. Hospitalized patients vary by 5 logs. That's so much more signal than their claimed noise! So their one real critique falls very flat.
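For intuition on the units here: each PCR cycle roughly doubles the target, so Ct differences convert to log10 viral-load differences at about 3.3 cycles per log. A quick sketch of the conversion (my own illustration, not from the pdf):

```python
import math

CYCLES_PER_LOG10 = math.log2(10)  # ~3.32 cycles per log10 of viral load

def fold_difference(ct_low, ct_high):
    """Approximate fold difference in starting viral load; lower Ct = more virus."""
    return 2 ** (ct_high - ct_low)

def logs_of_viral_load(delta_ct):
    """Convert a Ct difference into log10 units of viral load."""
    return delta_ct / CYCLES_PER_LOG10

# Their claimed 2-3 logs of noise is ~7-10 cycles of Ct wobble, against a
# dynamic range of ~33 cycles (~10 logs):
print(round(logs_of_viral_load(10), 1))   # 3.0 logs
print(f"{fold_difference(20, 38):.0e}")   # Ct 20 vs 38: ~3e+05-fold more virus
```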

However, they do understate the noise significantly, so we can strengthen their argument. Within-patient variability is already like 2-3 logs, as you can see from data here for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7151491/. So variability of viral loads across different patients, different collection methods, and different analysis methods will have more like 4-6 logs of variation. That’s the stronger argument.

But even this is ultimately too weak. 

Most of the variation is on the negative side: there are lots more ways to fail to get a good sample than there are to accidentally find the virus more concentrated than it really is. So low Ct values indicating high viral load are still very good signals! I don't know the exact numbers here because they aren't reported in many places, but your reasoning would hypothetically go: if you get a Ct value under 20, you'd better start canceling meetings and preparing for a possible hospital visit. If you get a Ct value of 38, maybe it'll end up getting much worse, or maybe not. Not much information there. This is simple reasoning—doctors do it all the time with other tests that have asymmetric error rates, saying "if you test positive on this you probably have X, but a negative doesn't rule it out."

And aside from this asymmetry, just the correlation is also really useful! I am not the first person to say this: googling turns up a bunch of instances of medical professionals saying similar things.

Saving this example for later, when everyone claims the CDC and other "experts" didn't act incredibly stupidly about boosters:

"Americans who have been fully vaccinated do not need a booster shot at this time," according to a joint statement from the Centers for Disease Control and Prevention (CDC) and Food and Drug Administration (FDA). "We continue to review any new data as it becomes available and will keep the public informed."

The statement came after Pfizer-BioNTech announced plans to seek authorization for a booster shot for its COVID-19 vaccine. Though all the vaccine manufacturers have been studying booster shots just in case they would be needed, Pfizer's decision to seek authorization so soon took experts by surprise, and many of them criticized the announcement, The New York Times reported.

From LiveScience.

(Note that we have already had weeks or months of data showing the vaccines are only ~60% "effective" against delta. The claim that the vaccinated do not "need" a booster is Not Even Wrong—just say that they are quite protected against mortality, but not that protected from contracting it, so they should be allowed to get a booster if possible. A two- or three-pronged claim is NOT too complicated for people! Stop talking to people like they are 5, and they might trust you a little more!)

Mimesis has re-revealed its awesome and godly power to me over the last few months. Not Girardian mimesis, but hominid mimesis. The best way to do almost anything is to literally copy others: especially the best people, but really the triangulation between any few people will do. Don't know how to write an email? Copy one you received. Don't know how to do any chore, cooking, dance, etc.? Just look it up on YouTube. This is a long way from the Connor of 2018, who fastidiously avoided watching YouTube videos of poi so I could explore it all on my own for months.

Mimesis has a bad rap in my local culture. But, huge postulate: mimesis is ONLY bad when coupled with such tight need for approval that it is a hard constraint on what you can do. That's the combination that results in whole segments of society that can't innovate, can't fix basic problems, general cheems mindset. In our scene of non-conformists, there is essentially no downside, I postulate!

You can make arguments like "thinking things through for yourself first can help avoid anchoring", or "you can genuinely learn better if you take a first stab yourself and then see the diff". Sure, but I think these are the exception that proves the rule. Holding off on mimesis is very useful in a few contexts, and all the time for a few occupations; for most people, 99% of stuff is best to do starting from the shoulders of giants. If you like thinking for yourself, trust me that you will do that just the same while cooking from a recipe compared to trying to derive it yourself. If I had just started learning poi the way the experts do it, I would have much more quickly gotten to a place where creative energy and first principles yielded interesting new results, rather than just new results.

Mimesis has a bad rap in my local culture.

Not sure where that is, but pure imitation with no innovation does seem to have downsides, yes. (Aside from anything else, incomplete copying can mean stuff degrades over time.) That doesn't mean it isn't a useful tool. There are other tools as well. Knowledge isn't always explicit, and doesn't have to be, but there are benefits to making things more explicit if that knowledge is used.


99% of stuff is best to do starting from the shoulders of giants.

Yeah... unless you acquire all the (explicit) knowledge yourself, approaches that are not imitation will still draw on others' knowledge and work. (That doesn't mean all of it will be right, easy to use, or useful.)

Reminder that the US is crossing 50% BA.2 in the next few days, and CA and NY have started to uptick, so there will probably be a serious wave in ~4 weeks, peaking in roughly 6-8 weeks. Plan accordingly!

(So ~4 weeks where things are fineish, then ~7 weeks where rates are higher, then 4 weeks to come back down. I.e. plan for May and June to have lots of COVID, and potential restrictions to continue into July.)

Bad metaphor mode of conversation:

Instead of saying true things or trying to make a point that meaningfully advances the conversation, you can just make points that aren't that good. Apply a bad analogy if you want.

I think this is surprisingly important around the watercooler for especially generative people, and something I've been missing and been wrong about when I vaguely look down on bad metaphors. Obviously you should not use this type of conversation in getting-things-done convos or many-person convos, but it seems tailored for few-person brainstorming/decompression/speculation with open time to fill. I've never really understood why people make lots of advancements while drinking beer after work (and sorry to those whom I gaslit), but I get it more now.

I know things like "brainstorming" or "babble" are supposed to already fill this gap, but they didn't do it for me. I think that's because there's an important difference between saying a ton of things that very loosely fit vs. taking one thing that sort of fits, saying about it what seems natural, then taking another and doing the same, and so on. That's much more of a "standard cognitive process" than doing explicit lateral thinking, and I think hewing closer to your brain's standard is a big part of what getting a beer is supposed to do for work.

A Few Lessons from Dominic Cummings on Politics

Barbell model of voters (or "delusion of the centre"), where many in the electorate are far to the left of politicians on white-collar crime and higher taxes on the rich, but far to the right of politicians on violent crime, anti-terrorism, and immigration.

You want to be empirical in a way almost all in politics aren't: run tons of focus groups and really listen to how your voters think, not just what policies they want.

Use a best-in-class data model. Polls naturally swing all over, much polling is bad; if you use these, make them Bayesian and get great people who really know what they're doing to figure them out. Then use these models to focus relentlessly on whatever has the largest effect size, which is swing voters. [Some other tricks here that seem worth not being as explicit about.]

Don't be patronizing, do have integrity—very hard in politics. 

Stay on message. Bill Clinton's campaign had 3 talking points, each phrased to maximize punch. "It's the economy, stupid", "read my lips", and another that I forget. Carville was incredible at focusing relentlessly on turning every interview question into a response on one of these three. People won't care about most of the stuff you could talk about, and you can't optimize everything, so just choose the few best messages that are most powerful to people and drive everything back to them. Watch The War Room about the Clinton campaign if you haven't yet.


"Read my lips, no new taxes" was G.H.W. Bush.

Clinton's campaign was against Bush, so they were throwing these words back at him.