Imagine being a student of physics, and coming across a blog post proposing a list of guidelines for "physicist motors"—motor designs informed by the knowledge of physicists, unlike ordinary motors.

Even if most of the things on the list seemed like sensible advice to keep in mind when designing a motor, the framing would seem very odd. The laws of physics describe how energy can be converted into work. To the extent that any motor accomplishes anything, it happens within the laws of physics. There are theoretical ideals describing how motors need to work in principle, like the Carnot engine, but you can't actually build an ideal Carnot engine; real-world electric motors or diesel motors or jet engines all have their own idiosyncratic lore depending on the application and the materials at hand; an engineer who worked on one might not be the best person to work on another. You might appeal to principles of physics to explain why some particular motor is inefficient or poorly designed, but you would not speak of "physicist motors" as if that were a distinct category of thing—and if someone did, you might quietly begin to doubt how much they really knew about physics.

As a student of rationality, I feel the same way about guidelines for "rationalist discourse." The laws of probability and decision theory describe how information can be converted into optimization power. To the extent that any discourse accomplishes anything, it happens within the laws of rationality.

Rob Bensinger proposes "Elements of Rationalist Discourse" as a companion to Duncan Sabien's earlier "Basics of Rationalist Discourse". Most of the things on both lists are, indeed, sensible advice that one might do well to keep in mind when arguing with people, but as Bensinger notes, "Probably this new version also won't match 'the basics' as other people perceive them."

But there's a reason for that: a list of guidelines has the wrong type signature for being "the basics". The actual basics are the principles of rationality one would appeal to in order to explain which guidelines are a good idea: principles like how evidence is the systematic correlation between possible states of your observations and possible states of reality, how you need evidence to locate the correct hypothesis in the space of possibilities, and how the quality of your conclusion can only be improved by arguments that have the power to change that conclusion.
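
To make the first of those principles concrete, here is a minimal sketch in Python (my own toy illustration of the standard Bayes' rule arithmetic, not anything drawn from either list): an observation only counts as evidence for a hypothesis to the extent that its probability differs depending on whether the hypothesis is true.

```python
# Evidence as systematic correlation between observations and reality:
# an observation moves belief only insofar as it is more probable under
# one state of the world than another.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """P(hypothesis | observation) for a binary hypothesis, by Bayes' rule."""
    joint_true = prior * p_obs_if_true
    joint_false = (1 - prior) * p_obs_if_false
    return joint_true / (joint_true + joint_false)

# An observation twice as likely when the hypothesis is true shifts belief:
print(posterior(0.5, 0.8, 0.4))  # 0.666...

# An observation equally likely either way leaves belief exactly where it started:
print(posterior(0.5, 0.8, 0.8))  # 0.5
```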

Contemplating these basics, it should be clear that there's just not going to be anything like a unique style of "rationalist discourse", any more than there is a unique "physicist motor." There are theoretical ideals describing how discourse needs to work in principle, like Bayesian reasoners with common priors exchanging probability estimates, but you can't actually build an ideal Bayesian reasoner. Rather, different discourse algorithms (the collective analogue of "cognitive algorithm") leverage the laws of rationality to convert information into optimization in somewhat different ways, depending on the application and the population of interlocutors at hand, much as electric motors and jet engines both leverage the laws of physics to convert energy into work without being identical to each other, and with each requiring their own engineering sub-specialty to design.

Or to use another classic metaphor, there's also just not going to be a unique martial art. Boxing and karate and ju-jitsu all have their own idiosyncratic lore adapted to different combat circumstances, and a master of one would easily defeat a novice of another. One might appeal to the laws of physics and the properties of the human body to explain why some particular martial arts school was not teaching their students to fight effectively. But if some particular karate master were to brand their own lessons as the "basics" or "elements" of "martialist fighting", you might quietly begin to doubt how much actual fighting they had done: either all fighting is "martialist" fighting, or "martialist" fighting isn't actually necessary for beating someone up.

One historically important form of discourse algorithm is debate, and its close variant the adversarial court system. It works by separating interlocutors into two groups: one that searches for arguments in favor of a belief, and another that searches for arguments against the belief. Then anyone listening to the debate can consider all the arguments to help them decide whether or not to adopt the belief. (In the court variant of debate, a designated "judge" or "jury" announces a "verdict" for or against the belief, which is added to the court's shared map, where it can be referred to in subsequent debates, or "cases.")

The enduring success and legacy of the debate algorithm can be attributed to how it circumvents a critical design flaw in individual human reasoning, the tendency to "rationalize"—to preferentially search for new arguments for an already-determined conclusion.

(At least, "design flaw" is one way of looking at it—a more complete discussion would consider how individual human reasoning capabilities co-evolved with the debate algorithm—and, as I'll briefly discuss later, this "bug" for the purposes of reasoning is actually a "feature" for the purposes of deception.)

As a consequence of rationalization, once a conclusion has been reached, even prematurely, further invocations of the biased argument-search process are likely to further entrench the conclusion, even when strong counterarguments exist (in regions of argument-space neglected by the biased search). The debate algorithm solves this sticky-conclusion bug by distributing a search for arguments and counterarguments among multiple humans, ironing out falsehoods by pitting two biased search processes against each other. (For readers more familiar with artificial than human intelligence, generative adversarial networks work on a similar principle.)
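
As a toy illustration of the mechanism (my own sketch; the argument model and numbers are invented for the example, not taken from anywhere), compare what a single motivated reasoner surfaces from argument-space with what two opposed motivated reasoners surface between them:

```python
import random

random.seed(0)

# Model arguments as (direction, strength) pairs: +1 supports the conclusion, -1 opposes it.
ARGUMENT_SPACE = [(random.choice((+1, -1)), random.random()) for _ in range(1000)]

def biased_search(side, budget=50):
    """A motivated reasoner samples argument-space but only reports arguments for their side."""
    return [a for a in random.sample(ARGUMENT_SPACE, budget) if a[0] == side]

def tally(arguments):
    """A listener weighs whichever arguments actually got surfaced."""
    return sum(direction * strength for direction, strength in arguments)

solo = tally(biased_search(+1))                        # rationalization alone
debate = tally(biased_search(+1) + biased_search(-1))  # two opposed biased searches

print(f"solo motivated reasoner: {solo:+.1f}")   # necessarily favors the fixed conclusion
print(f"adversarial debate:      {debate:+.1f}")  # reflects both regions of argument-space
```

The solo tally can only ever come out in favor of the predetermined conclusion; the pooled tally at least depends on what is actually in the space.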

For all its successes, the debate algorithm also suffers from many glaring flaws. For one example, the benefits of improved conclusions mostly accrue to third parties who haven't already entrenched on a conclusion; debate participants themselves are rarely seen changing their minds. For another, just the choice of what position to debate has a distortionary effect even on the audience; if it takes more bits to locate a hypothesis for consideration than to convincingly confirm or refute it, then most of the relevant cognition has already happened by the time people are arguing for or against it. Debate is also inefficient: for example, if the "defense" in the court variant happens to find evidence or arguments that would benefit the "prosecution", the defense has no incentive to report it to the court, and there's no guarantee that the prosecution will independently find it themselves.

Really, the whole idea is so galaxy-brained that it's amazing it works at all. There's only one reality, so correct information-processing should result in everyone agreeing on the best, most-informed belief-state. This is formalized in Aumann's famous agreement theorem, but even without studying the proofs, the result is obvious. A generalization to a more realistic setting without instantaneous communication gives the result that disagreements should be unpredictable: after Bob the Bayesian tells Carol the Coherent Reasoner his belief, Bob's expectation of the difference between his belief and Carol's new belief should be zero. (That is, Carol might still disagree, but Bob shouldn't be able to predict whether it's in the same direction as before, or whether Carol now holds a more extreme position on what adherents to the debate algorithm would call "Bob's side.")
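
For readers who would rather see the "unpredictable disagreement" property than take it on faith, here is a small Python check of a toy model I'm constructing for the occasion (it assumes Bob and Carol share a prior and that Carol can fully recover Bob's evidence from his announcement): Bob's expectation of Carol's updated belief comes out exactly equal to his own belief, so his expected difference from her is zero.

```python
from itertools import product

PRIORS = {0.3: 0.5, 0.7: 0.5}  # two hypotheses about a coin's bias, with equal prior probability

def likelihood(theta, flips):
    """Probability of a particular sequence of flips (1 = heads) under bias theta."""
    p = 1.0
    for f in flips:
        p *= theta if f == 1 else 1 - theta
    return p

def belief_high(flips):
    """Posterior probability that the bias is 0.7, given the observed flips."""
    joints = {t: prior * likelihood(t, flips) for t, prior in PRIORS.items()}
    return joints[0.7] / sum(joints.values())

def predictive(seen, future):
    """Probability of seeing the `future` flips next, given the flips already `seen`."""
    joints = {t: prior * likelihood(t, seen) for t, prior in PRIORS.items()}
    total = sum(joints.values())
    return sum((joints[t] / total) * likelihood(t, future) for t in PRIORS)

bob_flips = (1, 1, 0)  # Bob's private evidence
bob_belief = belief_high(bob_flips)

# Bob's expectation of Carol's belief after she combines his evidence with her own two flips:
expected_carol = sum(
    predictive(bob_flips, carol_flips) * belief_high(bob_flips + carol_flips)
    for carol_flips in product((0, 1), repeat=2)
)

print(f"Bob's own belief:                    {bob_belief:.4f}")      # 0.7000
print(f"Bob's expectation of Carol's belief: {expected_carol:.4f}")  # 0.7000, so expected difference is zero
```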

That being the normative math, why does the human world's enduringly dominant discourse algorithm take for granted the ubiquity of, not just disagreements, but predictable disagreements? Isn't that crazy?

Yes. It is crazy. One might hope to do better by developing some sort of training or discipline that would allow discussions between practitioners of such "rational arts" to depart from the harnessed insanity of the debate algorithm with its stubbornly stable "sides", and instead mirror the side-less Bayesian ideal, the free flow of all available evidence channeling interlocutors to an unknown destination.

Back in the late 'aughts, an attempt to articulate what such a discipline might look like was published on a blog called Overcoming Bias. (You probably haven't heard of it.) It's been well over a decade since then. How is that going?

Eliezer Yudkowsky laments:

In the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to—they did not really get Bayesianism as thermodynamics, say, they did not become able to see Bayesian structures any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts.

"A vague spirit of how to reason and argue" seems like an apt description of what "Basics of Rationalist Discourse" and "Elements of Rationalist Discourse" are attempting to codify—but with no explicit instruction on which guidelines arise from deep object-level principles of normative reasoning, and which from mere taste, politeness, or adaptation to local circumstances, it's unclear whether students of 2020s-era "rationalism" are poised to significantly outperform the traditional debate algorithm—and it seems alarmingly possible to do worse, if the collaborative aspects of modern "rationalist" discourse allow participants to introduce errors that a designated adversary under the debate algorithm would have been incentivized to correct, and most "rationalist" practitioners don't have a deep theoretical understanding of why debate works as well as it does.

Looking at Bensinger's "Elements", there's a clear-enough connection between the first eight points (plus three sub-points) and the laws of normative reasoning. Truth-Seeking, Non-Deception, and Reality-Minding, trivial. Non-Violence, because violence doesn't distinguish between truth and falsehood. Localizability, in that I can affirm the validity of an argument that A would imply B, while simultaneously denying A. Alternative-Minding, because decisionmaking under uncertainty requires living in many possible worlds. And so on. (Lawful justifications for the elements of Reducibility and Purpose-Minding left as an exercise to the reader.)

But then we get this:

  1. Goodwill. Reward others' good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble space.

I can believe that these are good ideas for having a pleasant conversation. But separately from whether "Err on the side of forgiveness over punishment" is a good idea, it's hard to see how it belongs on the same list as things like "Try not to 'win' arguments using [...] tools that work similarly well whether you're right or wrong" and "[A]sk yourself what Bayesian evidence you have that you're not in those alternative worlds".

The difference is this. If your discourse algorithm lets people "win" arguments with tools that work equally well whether they're right or wrong, then your discourse gets the wrong answer (unless, by coincidence, the people who are best at winning are also the best at getting the right answer). If the interlocutors in your discourse don't ask themselves what Bayesian evidence they have that they're not in alternative worlds, then your discourse gets the wrong answer (if you happen to live in an alternative world).
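
To put a number on the first of those failure modes (a toy simulation of my own construction, not anything from the "Elements" post): if the probability of a claim "winning" the argument is the same whether the claim is true or false, the discourse's verdicts carry no information about the truth.

```python
import random

random.seed(0)

def verdict_accuracy(p_win_if_true, p_win_if_false, trials=100_000):
    """Fraction of claims whose won/lost verdict matches whether they were actually true."""
    correct = 0
    for _ in range(trials):
        claim_is_true = random.random() < 0.5
        p_win = p_win_if_true if claim_is_true else p_win_if_false
        claim_wins = random.random() < p_win
        correct += (claim_wins == claim_is_true)
    return correct / trials

# Truth-tracking tools: winning is far more likely when the claim is true.
print(verdict_accuracy(0.9, 0.1))  # ~0.90

# Tools that work equally well either way: the verdicts are pure noise.
print(verdict_accuracy(0.7, 0.7))  # ~0.50
```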

If your discourse algorithm errs on the side of sticks over carrots (perhaps, emphasizing punishing others' bad epistemic conduct more than most people naturally do), then ... what? How, specifically, are rough-and-tumble spaces less "rational", more prone to getting the wrong answer, such that a list of "Elements of Rationalist Discourse" has the authority to designate them as non-default?

I'm not saying that goodwill is bad, particularly. I totally believe that goodwill is a necessary part of many discourse algorithms that produce maps that reflect the territory, much like how kicking is a necessary part of many martial arts (but not boxing). It just seems like a bizarre thing to put in a list of guidelines for "rationalist discourse".

It's as if guidelines for designing "physicist motors" had a point saying, "Use more pistons than most engineers naturally do." It's not that pistons are bad, particularly. Lots of engine designs use pistons! It's just, the pistons are there specifically to convert force from expanding gas into rotational motion. I'm pretty pessimistic about the value of attempts to teach junior engineers to mimic the surface features of successful engines without teaching them how engines work, even if the former seems easier.

The example given for "[r]eward[ing] others' good epistemic conduct" is "updating". If your list of "Elements of Rationalist Discourse" is just trying to apply a toolbox of directional nudges to improve the median political discussion on social media (where everyone is yelling and no one is thinking), then sure, directionally nudging people to look like they're updating probably is a directional improvement. It still seems awfully unambitious, compared to trying to teach the criteria by which we can tell it's an improvement. In some contexts (in-person interactions with someone I like or respect), I think I have the opposite problem, of being disposed to agree with the person I'm currently talking to, in a way that shortcuts the slow work of grappling with their arguments and doesn't stick after I'm not talking to them anymore; I look as if I'm "updating", but I haven't actually learned. Someone who thought "rationalist discourse" entailed "[r]eward[ing] others' good epistemic conduct (e.g., updating) more than most people naturally do" and sought to act on me accordingly would be making that problem worse.

A footnote on the "Goodwill" element elaborates:

Note that this doesn't require assuming everyone you talk to is honest or has good intentions.

It does have some overlap with the rule of thumb "as a very strong but defeasible default, carry on object-level discourse as if you were role-playing being on the same side as the people who disagree with you".

But this seems to contradict the element of Non-Deception. If you're not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?

Other intellectual communities have a name for the behavior of role-playing being on the same side as people you disagree with: they call it "concern trolling", and they think it's a bad thing. Why is that? Are they just less rational than "us", the "rationalists"?

Here's what I think is going on. There's another aspect to the historical dominance of the debate algorithm. The tendency to rationalize new arguments for a fixed conclusion is only a bug if one's goal is to improve the conclusion. If the fixed conclusion was adopted for other reasons—notably, because one would benefit from other people believing it—then generating new arguments might help persuade those others. If persuading others is the real goal, then rationalization is not irrational; it's just dishonest. (And if one's concept of "honesty" is limited to not consciously making false statements, it might not even be dishonest.) Society benefits from using the debate algorithm to improve shared maps, but most individual debaters are mostly focused on getting their preferred beliefs onto the shared map.

That's why people don't like concern trolls. If my faction is trying to get Society to adopt beliefs that benefit our faction onto the shared map, someone who comes to us role-playing being on our side, but who is actually trying to stop us from adding our beliefs to the shared map just because they think our beliefs don't reflect the territory, isn't a friend; they're a double agent, an enemy pretending to be a friend, which is worse than the honest enemy we expect to face before the judge in the debate hall.

This vision of factions warring to make Society's shared map benefit themselves is pretty bleak. It's tempting to think the whole mess could be fixed by starting a new faction—the "rationalists"—that is solely dedicated to making Society's shared map reflect the territory: a culture of clear thinking, clear communication, and collaborative truth-seeking.

I don't think it's that simple. You do have interests, and if you can fool yourself into thinking that you don't, your competitors are unlikely to fall for it. Even if your claim to only want Society's shared map to reflect the territory were true—which it isn't—anyone could just say that.

I don't immediately have solutions on hand. Just an intuition that, if there is any way of fixing this mess, it's going to involve clarifying conflicts rather than obfuscating them—looking for Pareto improvements, rather than pretending that everyone has the same utility function. That if something called "rationalism" is to have any value whatsoever, it's as the field of study that can do things like explain why it makes sense that people don't like concern trolling. Not as its own faction with its own weird internal social norms that call for concern trolling as a very strong but defeasible default.

But don't take my word for it.

152 comments
evand:

"Physicist motors" makes little sense because that position won out so completely that the alternative is not readily available when we think about "motor design". But this was not always so! For a long time, wind mills and water wheels were based on intuition.

But in fact one can apply math and physics and take a "physicist motors" approach to motor design, which we see appearing in the 18th and 19th centuries. We see huge improvements in the efficiency of things like water wheels, the invention of gas thermodynamics, steam engines, and so on, playing a major role in the industrial revolution.

The difference is that motor performance is an easy target to measure and understand, and very closely related to what we actually care about (low Goodhart susceptibility). There are a bunch of parameters -- cost, efficiency, energy source, size, and so on -- but the number of parameters is fairly tractable. So it was very easy for the "physicist motor designers" to produce better motors, convince their customers the motors were better, and win out in the marketplace. (And no need for them to convince anyone who had contrary financial incentives.)

But "discourse" is a much more complex target, ... (read more)

M. Y. Zuo:
If discourse has such high dimensionality, compared to motors, how can anyone be confident that any progress has been made at all? Now, or ever?
Sparr Risher:
You can describe metrics that you think align with success, which can be measured and compared in isolation. If many / most / all such metrics agree, then you've probably made progress on discourse as a whole.
tailcalled:
Has anyone done this? Because I haven't seen this done.
M. Y. Zuo:
Metrics are only useful for comparison if they're accepted by a sufficiently broad cross section of society, since nearly everyone engages in discourse. Otherwise the incentive will be for the interlocutor, or groups of interlocutors, to pick a few dozen they selectively prefer out of the possibility space of thousands or millions (?), which nearly everyone else will ignore. The parent comment highlighted the fact that certain metrics measuring motor performance are universally, or near universally, agreed upon because they have a direct and obvious relation with the desired outcome. I can't think of any for discourse that could literally receive 99.XX% acceptance, unlike shaft horsepower or energy consumption.

As someone working on designing better electric motors, I can tell you that "What exactly is this metric I'm trying to optimize for?" is a huge part of the job. I can get 30% more torque by increasing magnet strength, but it increases copper loss by 50%. Is that more better? I can drastically reduce vibration by skewing the stator but it will cost me a couple percent torque. Is that better or worse? There are a ton of things to trade between, and even if your end application is fairly well specified it's generally not specified well enough to remove all significant ambiguity in which choices are better.

It's true that there are some motor designs that are just better at everything (or everything one might "reasonably" care about), but that's true for discourse as well. For example, if you are literally just shrieking at each other, whatever you're trying to accomplish you can almost certainly accomplish it better by using words -- even if you're still going to scream those words.

The general rule is that if you suck relative to any nebulosity in where on the pareto frontier you want to be, then there are "objective" gains to be made. In motors, simultaneous improvements in efficie[...]

M. Y. Zuo:
  If you meant to reply to my comment, the point was that there is nothing for discourse that's accepted as widely as torque, magnet strength, copper loss, vibration, etc... A sufficiently large supermajority of engineering departments on planet Earth can agree with very little effort on how to measure torque, for example. But even this scenario is superfluous because there are international standardization bodies that have literally resolved any conflict in interpretation for the fundamental metrics, like those for velocity, mass, momentum, angular momentum, magnetic strength, etc... There's nothing even close to that for discourse.
jimmy:
I hear what you're saying. What I'm saying is that as someone whose day job is in large part about designing bleeding edge aerospace motors, I find that the distinction you're making falls apart pretty quickly in practice when I try to actually design and test a "physics motor". Even things as supposedly straightforward as "measuring torque" haven't been as straightforward as you'd expect. A few years ago we took one of our motors to a major aerospace company to test on their dyno and they measured 105% efficiency. The problem was in their torque measurements. We had to get clever in order to come up with better measurements.

Coincidentally, I have also put a ton of work into figuring out how to engineer discourse, so I also have experience in figuring out what needs to be measured, how it can be measured, and how you can know how far to trust your measurements to validate your theories. Without getting too far into it, you want to start out by calibrating against relatively concrete things like "Can I get this person, who has been saying they want to climb up this wall but are too afraid, to actually climb up the rock wall -- yes or no?". If you can do this reliably where others fail, you know you're doing something that's more effective than the baseline (even though that alone doesn't validate your specific explanation uniquely). It'd take a book to explain how to build from there, but at the end of the day if you can do concrete things that others cannot and you can teach it so that the people you teach can demonstrate the same things, then you're probably doing something with some validity to it. Probably.

I'm not saying that there's "no difference" between the process of optimizing discourse and the process of optimizing motors, but it is not nearly as black and white as you think. It's possible to lead yourself astray with confirmation bias in "discourse" related things, but you should see some of the shit engineers can convince themselves of without [...]
M. Y. Zuo:
Can you link to some examples?
jimmy:
Yes, but it's worth pointing out what you can actually expect to get from it, and how easily. Most of what I'm talking about is from personal interactions, and the stuff that's online isn't like "Oh, the science is unanimous, unarguable and unambiguous" -- because we're talking about the equivalent of "physics motors" not "engineering motors". Even if our aerospace lab dyno results were publicly available you'd be right not to trust them at face value.

If you have a physics degree then saying "Here's the reasoning, here are the computer simulations and their assumptions, and here's what our tests have shown so far" is easy. If you can't distinguish valid physics from "free energy" kookiness, then even though it's demonstrable and has been demonstrated to those with a good understanding of motor testing validity who have been following this stuff, it's not necessarily trivial to set up a sufficiently legible demonstration for someone who hasn't. It's real, we can get into how I know, but it might not be as easy as you'd like.

The thing that proved to me beyond a shadow of a doubt that there exist bright feedback oriented minds that have developed demonstrable abilities involved talking to one over and over and witnessing the demonstrations first hand as well as the feedback cycles. This guy used to take paying clients for some specific issue they wanted resolved (e.g. "fear of heights"), set concrete testable goals (e.g. "If I climb this specific wall, I will consider our work to have been successful"), and then track his success rate over time and as he changed his methods. He used to rack his brain about what could be causing the behavior he'd see in his failures, come up with an insight that helps to explain, play with it in "role play" until he could anticipate what the likely reactions would be and how to deal with them, and then go test it out with actual clients. And then iterate.

On the "natural discourse, not obviously connected to deliberate cultivation o[...]
evand:
I note that "sufficiently broad" might mean something like "most of LessWrong users" or "most people attending this [set of] meetups". Just as communication is targeted at a particular audience, discourse norms are (presumably) intended for a specific context. That context probably includes things like intended users, audience, goals, and so on. I doubt "rationalist discourse" norms will align well with "televised political debate discourse" norms any time soon. Nonetheless, I think we can discuss, measure, and improve rationalist discourse norms; and I don't think we should concern ourselves overly much with how well those norms would work in a presidential debate or a TV ad. I suspect there are still norms that apply very broadly, with broad agreement -- but those mostly aren't the ones we're talking about here on LessWrong.

I think a disanalogy here is that all motors do in fact follow the laws of physics (and convert electricity into rotation, otherwise we wouldn’t call it a motor). Whereas not all discourse systematically leads people towards true beliefs. So rationalist discourse is a strict subset of discourse in a way that physicist motors is not a strict subset of motors.

In general, I agree that we should be open to the possibility that there exist types of discourse that systematically lead people towards true beliefs, but that look very different from “rationalist discourse” as described by Duncan & Rob. That said, I think I’m less impressed by the truth-finding properties of debates / trials than you are. Like, in both legal trials and high-school debate, the pop-culture stereotype is that the side with a better lawyer / debater will win, not the side that is “correct”. But I don’t really know.

I also agree that it’s worth distinguishing “things that seem empirically to lead to truth-finding for normal people in practice” versus “indisputable timeless laws of truth-finding”.

I was reading "Reward others' good epistemic conduct (e.g., updating) more than most people naturally do." as like "I... [...]

TAG:
Yep, rationality is a norm, not a description.
Raemon:

It looks like this post is resting on defining "rationalist" as "one who studies the laws of rationality", as opposed to "someone training to think rationally", but it hasn't really acknowledged that it's using this definition (when I think Duncan and Robby's posts seem pointed more at the latter definition).

(Actually, looking more, I think this post sort of equivocates between the two, without noting that it's done so). 

I'm not 100% sure I've parsed this right, but, this looks at first glance like the sort of language trick that you (Zack) are often (rightfully) annoyed at.

(I think it's a reasonable conversational move to point out someone else's definition of a word isn't the only valid definition, and pointing out their frame isn't the only valid frame. But if you're doing that it's better to do that explicitly)

Zack_M_Davis:
I'll agree that the "physicist motors" analogy in particular rests on the "one who studies" definition, although I think a lot of the points I make in this essay don't particularly depend on the analogy and could easily be written up separately.

I guess you could view the "foreign policy" motivating this post as being driven by two motives: first, I'd rather not waste precious time (in the year 2023, when a lot of us have more important things to do) fighting over the "rationalist" brand name; if someone else who also cares about thinking well, thinks that I'm going about everything all wrong, I think it's fine that we just have our own separate dojos, Archipelago-style. That's why the post emphasizes that there are many types of motors and many types of martial arts.

But secondly, insofar as I'm unfortunately stuck fighting over the brand name anyway because words mean what they mean in practice, I really do think that the thing that made middle Yudkowsky (circa 2005–2013) world-changingly valuable was his explanation of there being objective laws of thought (as exemplified by the "Technical Explanation", "The Bottom Line", or "The Second Law of Thermodynamics, and Engines of Cognition"), so when I see the brand name being used to market a particular set of discourse norms without a clear explanation of how these norms are derived from the law, that bothers me enough to quickly write an essay or two about it, even though this is probably not a great use of my time or community-drama-instigating budgets in the year 2023.

so when I see the brand name being used to market a particular set of discourse norms without a clear explanation of how these norms are derived from the law, that bothers me enough to quickly write an essay or two about it

Seems great to me! I share your intuition that Goodwill seems a bit odd to include. I think it's right to push back on proposed norms like these and talk about how justified they are, and I hope my list can be the start of a conversation like that rather than the end.

I do have an intuition that Goodwill, or something similar to Goodwill, plays an important role in the vast majority of human discourse that reliably produces truth. But I'm not sure why; if I knew very crisply what was going on here, maybe I could reduce it to other rules that are simpler and more universal.

Raemon:
To be clear, I endorse you doing that, but I would like you to do it without sleight-of-hand-frame-control. (I do agree you could probably have written the second half of the post without relying on the first half's structure, but, that's not what you did)

I have on my todo list to write up a post that's like "hey guys here is an explanation of Frame Control/Manipulation that is more rigorous and more neutrally worded than Aella's post about it, and here's why I think we should have a habit of noticing it.". And then, maybe afterwards, a post going: "Hey, I think 'notice your own frame control, and be a bit careful about it' should graduate to a thing you are obligated to learn, as a good LW citizen. What do people think of that?", and get some sense of how The People think about it. And, depending on how that goes, maybe it becomes an actual LW norm.

I haven't done that and it doesn't seem fair to rush it or assume how that'll play out, so, currently this is more of a suggestion that I think you should probably agree to on your own terms rather than something I'm enforcing as a moderator, but, flagging that that's a longer term agenda of mine.

In your view, is there an important difference between frame control, and the author having a particular frame that they use in a particular essay?

I'm proud of this blog post. I think it's a good blog post that clearly explains my ideas in a way that's engaging to read. If someone wants to talk about my motivations for writing this post and why I chose the analogies I did, I'm happy to have that discussion in the comment section, like we're doing now.

But it seems to me that a blog post that talked about my objections to Bensinger's Goodwill element, without first explaining the "motors" and "martial arts" analogies as illustrations of how I'm thinking about the topic, would be worse than this post, primarily because it would be less faithful to how I'm thinking about the topic, but also because it would just be less interesting to read.

If someone thinks my choice of analogies (or "frames"; I'm not sure if there's a specific definition of "frame" I'm supposed to be familiar with in this context) is misleading for some specific reason, they're welcome to argue that in the comment section. So far, you have not persuaded me that I should have made any different writing choices.

[...]
Raemon:
Yep! Distinctions in Frame Control

I'm still working through this, which is part of why the post isn't written up yet. I'm also not sure if I'm actually going to use the phrase 'frame control' because it might just be too easy to weaponize in a way that makes it more unhelpful than helpful. (i.e. the concept I have in mind here is something it makes sense to have the norm of 'notice when you do it, and be careful with it', not 'don't do it ever')

But, here are my current thoughts on how I currently carve up the space here:

1. having a frame, at all [i.e. set of ways to conceptualize a problem or solution-space or what questions to ask]
2. having a strongly held/presented frame, such as by speaking confidently/authoritatively (which many people who don't hold their own frames very strongly sometimes find disorienting)
3. having an insistently held frame (where when someone tries to say/imply 'hey, my frame is X' you're like 'no, the frame is Y', and if they're like 'no, it's X'...)
4. frame manipulation (where you change someone else's frame in a subtle way without them noticing, i.e. presenting a set of assumptions in a way that aren't natural to question, or equivocating on definitions of words in ways that change what sort of questions to think about without people noticing you've done so)

#2, #3 and #4 can be mixed and matched. The places where people tend to use the word 'frame control' most often refer to #3 and #4, frame-manipulation and frame-insistence.

I'm a bit confused about how to think about 'strong frames' – I think there's nothing inherently wrong with them, but if Alice is 'weaker willed' than Bob, she may end up adopting his frame in ways that subtly hurt her. This isn't that different from, like, some people being physically bigger and more likely to accidentally hurt a smaller person. I wouldn't want society to punish people for happening-to-be-big, but it feels useful to at least notice 'bigness privilege' sometimes.

That said, strongly held fr[...]
Zack_M_Davis:
I'm definitely doing #2. I can see your case that the paragraph starting with "But there's a reason for that" is doing #4. But ... I'm not convinced that this kind of "frame manipulation" is particularly bad? If someone is unhappy with the post's attempt to "grab the frame" (by acting as if my conception of rationalist is the correct one), I'm happy to explain why I did that in the comments. Do I have to disclaim it in the post? That just seems like it would be worse writing.
Raemon:
I think in isolation it wouldn't be particularly bad, no. I think it'd rise to the level of 'definitely better to avoid' (given [probably?] shared assumptions about truthseeking and honesty), but, it's within the set of mistakes I think are fairly normal to make. I feel like it is part of a broader pattern that (I think probably) adds up to something noticeably bad, but it'd take me a while of active effort to find all the things that felt off to me and figure out if I endorse criticizing it as a whole.

(So, like, for now I'm not trying to make a strong argument that there's a particular thing that's wrong, but, like, I think you have enough self-knowledge to notice 'yeah something is off in a sticky way here' and figure it out yourself. ((But, as previously stated, I don't have a strong belief that this makes sense to be your priority atm)))
Raemon:
Oh, also to clarify, in my current view, you don’t need to tack on the ‘because [reasons]’ to avoid it being frame manipulation. Simply noting that you think it makes more sense to use a different definition is enough to dispel the sleight of hand feeling. (Although listing reasons may make it more persuasive that people use this definition rather than another one)
Said Achmiz:
Are these two things not intimately connected? Should we not study the laws of rationality in the course of training to think rationally (indeed, in the course of determining what it means to think rationally, and determining how to train to think rationally)? And what is the point of studying the laws of rationality, if not to apply them?
Raemon:
I certainly do think they're connected, but, they are still distinct concepts, and I think part of the reason Zack is focused on "rationalists as students of the laws of rationality" vs "applicants thereof" is that a community of law-studiers should behave differently. (as I understand it, pure math and applied math are different and people make a big deal about it??) (and, to be clear, I think this is a pretty interesting consideration I hadn't been thinking of lately. I appreciate Zack bringing it up, just want him to not be slight-of-handy about it)
Said Achmiz:
Hmm, but I don’t think that rationality is actually analogous to math, in this respect. I think that the intimate connection between learning and applying rationality is, actually, a core property of rationality as a domain, as distinct from domains like math. Any disconnect between study and application threatens to undermine both!
Vladimir_Nesov:
The beauty of the subject!

Inflation of "rationality" needs more specific anchors to combat it. As it stands, any purpose that looks good for someone (especially if it's actually quite good) stands a risk of getting enshrined into a "principle of rationality", such that following that principle advances the purpose, while dismissing the principle starts sounding "irrational", a norm violation if there is one in a garden of rationality, worth discouraging.[1]

I think Scott's asymmetric weapons framing gestures at the concept/problem more robustly, while Eliezer's cognitive algorithms framing gives practical course-correcting advice:

Similarly, a rationalist isn't just somebody who respects the Truth.
All too many people respect the Truth.
A rationalist is somebody who respects the processes of finding truth.

At the moment, LW has accumulated enough anti-epistemology directed at passing good and sensible things for rationality that a post like this gets rejected on the level of general impression. I think a post focused on explaining the problem with unreflectively rejecting posts like this, or on stratifying meaningful senses of "rationality" as distinct from all things good and true, without simultaneously [...]

If your discourse algorithm errs on the side of sticks over carrots (perhaps, emphasizing punishing others' bad epistemic conduct more than most people naturally do), then ... what? How, specifically, are rough-and-tumble spaces less "rational", more prone to getting the wrong answer, such that a list of "Elements of Rationalist Discourse" has the authority to designate them as non-default?

In my mind the Goodwill norm has a straightforward justification: Absent goodwill, most people are prone to view disagreement as some sort of personal hostility, similar to an insult. This encourages us to view their arguments as soldiers, rather than as an exchange of evidence. Which leads to a mind-killing effect, i.e. it makes us irrational.

To be sure, I think that some groups of people, particularly those on the autism spectrum, do not have a lot of this "hostility bias". So the Goodwill norm is not very applicable on platforms where many of those people are. Goodwill is likely a lot more important on Twitter than on Hacker News or Less Wrong.

In general, norms which counter the effect of common biases seem to be no less about rationality than norms which have to do more directly with probability or decision theory.

Said Achmiz:
This seems to imply a norm of “don’t assume hostility”, rather than a norm of “express good will”. The key difference is that the former moves you closer to the truth, by negating a bias, whereas the latter is, at best, truth-neutral (and quite likely truth-distortionary).
cubefox:
Showing goodwill is better than just not assuming hostility, since it also makes your opposite less likely to assume hostility themselves.
Said Achmiz:
But showing goodwill is worse than just not assuming hostility, because it requires that you communicate something other than whatever facts/views/models/etc. you are trying to communicate—and that's the best case. The more common, and much worse, case is when your communication of goodwill is basically a lie, or involves other distortions of truth.

Consider the quoted guideline: But why should we err at all? Should we not, rather, use as many carrots and sticks as is optimal? Should we not forgive only and precisely when forgiveness is warranted, and punish only and precisely when (and in the way that) punishment is warranted? Civility is a fine and good default, but what if incivility is called for—should we nonetheless err on the side of civility? Why?

The justification given is "empirically, this works". Well, we may believe that or disbelieve it, but either way, the core of the approach here is "distort truth, in the service of truth". That alone makes the justification not "straightforward" at all.
dxu:
Because we cannot choose not to err. So, given that we will err, and given that we err with asymmetric frequency in a particular direction—(and given that errors in that direction also tend to have asymmetrically worse consequences)—then naturally, better to compensate for that with a push in the opposite direction, than to compensate for it not at all. Straightforward enough, in my estimation!
Said Achmiz:
The correct approach, it seems to me, is to determine in which direction we are in fact erring, and to rectify that error by adjusting in the opposite direction, as much as is necessary in order to counteract the error (and no more—for that, too, would be an error). But you seem to be suggesting first (a) surrendering to inevitable error, not even trying to not err, and (b) correcting, not by precisely as much as is necessary (or some attempt at approximating that amount), but simply by… some arbitrary amount (trusting that it’s enough? trusting that it’s not too much?). This would seem to be a poor principle to elevate to the rank of “guideline for rational discourse”.
dxu:

But you seem to be suggesting first (a) surrendering to inevitable error, not even trying to not err

Certainly not. Recalibrating one's intuitions to better reflect reality is an admirable aim, and one in which we should all be engaged. However, as far as norms of discourse go, there is more to the matter than that: different people will unavoidably have differences of intuition regarding their interlocutor's goodwill, with certain individuals quicker to draw the line than others. How best to participate in (object-level) discourse in spite of these differences of (meta-level) opinion, without having to arbitrate that meta-level disagreement from scratch each time, is its own, separate question.

(b) correcting, not by precisely as much as is necessary (or some attempt at approximating that amount), but simply by… some arbitrary amount (trusting that it’s enough? trusting that it’s not too much?).

One of the consequences of being the type of agent that errs at all, is that estimating the precise magnitude of your error, and hence the precise size of corrective factor to apply, is unlikely to be possible.

This does not, however, mean that we are left in the dark, with no recourse but to [...]

Said Achmiz:
Sure, but why should this question have an answer like "we just can't not err, or even reduce how much we err"? Why would we expect this?

Also (and perhaps more importantly): Hold on, hold on. How did we get to "intuitions regarding their interlocutor's goodwill"? We started at "some people perceive disagreements as hostility". This is true, some (indeed, many) people do this. The solution to this problem on an individual level is "don't do that". The solution to this problem on a social level is "have norms that firmly oppose doing that". Why are we suddenly having to have "goodwill", to try to divine how much "goodwill" other people have, etc.? We identified a problem and then we identified the solution. Seems like we're done.

How is that a consequence of "being the type of agent that errs at all"? I don't see it—please elaborate.

Yes, I agree. The solution to this is… as I said above. Stop perceiving disagreement as hostility; discourage others from doing so.

The rest of your comment, from that point, seems to continue conflating perception of others' behavior, and one's own behavior. I think it would be good to disentangle these two things.
dxu:
It seems possible at this point that some of our disagreement may stem from a difference in word usage. When I say "goodwill" (or, more accurately, when I read "goodwill" in the context of Rob Bensinger's original post), what I take it to mean is something along the lines of "being (at least in the context of this conversation, and possibly also in the broader context of participation on LW as a whole) interested in figuring out true things, and having that as a primary motivator during discussions".

The alternative to this (which your use of "hostility" appears to qualify for as a special case) is any situation in which that is not the case, i.e. someone is participating in the discussion with some other aim than arriving at truth. Possible alternative motivations here are too numerous to list comprehensively, but (broadly speaking) include classes such as: wanting confirmation for their existing beliefs, wanting to assert the status of some individual or group, wanting to lower the status of some individual or group, etc. (That last case seems possibly to map to your use of "hostility", where specifically the individual or group in question includes one of the discussion's participants.)

This being the case, my response to what you say in your comment, e.g. here and here is essentially that I agree, but that I don't see how (on your view) Rob's proposed norm of "assuming goodwill" isn't essentially a restatement of your "don't perceive disagreements as hostility". (Perhaps you think the former generalizes too much compared to the latter, and take issue with some of the edge cases?)

In any case, I think it'd be beneficial to know where and how exactly your usage/perception of these terms differ, and how those differences concretely lead to our disagreement about Rob's proposed norm.
Rob Bensinger:
"Err on the side of X" here doesn't mean "prefer erring over optimality"; it means "prefer errors in direction X over errors in the other direction". This is still vague, since it doesn't say how much to care about this difference; but it's not trivial advice (or trivially mistaken).
Said Achmiz:
Yes, I know what the expression means. But that doesn’t answer the objection, which is “why are we concerning ourselves with the direction of the errors, when our objective should be to not have errors?” The actual answer has already been given elsethread (a situation where changing the sign of the error is substantially easier than reducing magnitude of error, plus a payoff matrix that is asymmetric w.r.t. the direction of error).
jimmy:
I agree that "Err on the side of ____" is technically worse than "Try to not err", but I'd argue that it's just a somewhat sloppy and non-literal way of conveying a valid point. The way I'd say it if being more careful is to say that "Insufficient and excess stick are both problems,  however there is a natural tendency to stick too much. Additionally, excess stick quickly begets excess stick, and if you allow things to go supercritical you can quickly destroy the whole thing, so before acting on an impulse to stick make really sure that you aren't just succumbing to this bias.".  Or in other words "Your sights are likely out of alignment, so aim higher than seems appropriate in order to not aim too low in expectation".
Measure:
Sometimes "try not to err" will result in predictably worse outcomes than "try to minimize the damage your erring causes, even if that means you are more likely or even certain to err".
jimmy:
Agreed. You want to "try not to err" in expected value, not in "inches from the bullseye". Sometimes this means you try to put the center of your distribution offset from the bullseye. I didn't see it as the primary point of contention so I didn't mention it, but you're right, it's probably worth pointing out explicitly.
Said Achmiz:
What are some examples of this?

Ideally, I would arrive at my workplace exactly when my shift starts (zero error, zero loss). But if I'm ten minutes late, I get in trouble with my boss (small error, large loss), and if I'm ten minutes early, I read a magazine in the breakroom (small error, small loss). Therefore, I should "err on the side of" leaving early.

That is, the "err on the side of" idiom arises from the conflation of different but related optimization problems. The correct solution to the worker's full problem (taking into account the asymmetrical costs of arriving early or late) is an incorrect solution to the "being (exactly) on time" problem.
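
In code, with loss numbers made up purely for illustration:

```python
LOSS_IF_LATE = 10.0   # trouble with the boss
LOSS_IF_EARLY = 1.0   # reading a magazine in the breakroom

def expected_loss(planned_margin, jitter=(-10, -5, 0, 5, 10)):
    """Plan to arrive `planned_margin` minutes early; actual arrival varies by `jitter` minutes."""
    total = 0.0
    for j in jitter:
        minutes_late = j - planned_margin
        if minutes_late > 0:
            total += LOSS_IF_LATE
        elif minutes_late < 0:
            total += LOSS_IF_EARLY
    return total / len(jitter)

for margin in (0, 5, 10, 15):
    print(f"aim {margin:>2} minutes early -> expected loss {expected_loss(margin):.2f}")
# aim  0 -> 4.40, aim  5 -> 2.60, aim 10 -> 0.80, aim 15 -> 1.00
```

The optimum aims early, but only by enough to cover the variability in arrival times; aiming even earlier starts losing again, which is the "as much as is necessary and no more" point.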

Said Achmiz:
I see, thanks. Do you think that this dynamic appears in the problem which is the subject of the top-level discussion here?

Yes. If my comments are too mean, I might start an unpleasant and unproductive flame war (small error, large loss). If my comments are too nice, they might be slightly less clear than a less nice comment, but nothing dramatically bad like a flame war happens (small error, small loss). Therefore I (arguably) should "err on the side of carrots over sticks."

If "Elements of Rationalist Discourse"'s Goodwill item had explicitly laid out the logic of asymmetric costs rather than taking "err on the side of" as a primitive, I'd still be skeptical, but this post's discussion of it wouldn't be written the same way (and it's possible that I might not have bothered to write the post at all).

Said Achmiz:
Doesn’t this assume that the cost of writing comments does not vary with “niceness” of the comments (e.g., because it is zero)?
Zack_M_Davis:
That's one reason someone might object to the asymmetrical-costs argument for niceness, but I'm skeptical that it's the real reason. I think what's more typically going on is that there's a conflict between people who want to enforce politeness norms and people who want the freedom to be blunt. In venues where the polite faction has the upper hand (by karma voting weight, moderator seats, &c.), blunt people have an incentive to dishonestly claim that writing polite comments is more expensive than it actually is, because polite voters and moderators might be partially swayed by that argument, whereas the polite people would not be sympathetic if the blunt people said what they were actually thinking.

In venues where the polite faction has the upper hand (by karma voting weight, moderator seats, &c.), blunt people have an incentive to dishonestly claim that writing polite comments is more expensive than it actually is, because polite voters and moderators might be partially swayed by that argument, whereas the polite people would not be sympathetic if the blunt people said what they were actually thinking.

Of course this is true, but that doesn’t actually mean that there isn’t, in fact, a cost differential; it only means that claims of such constitute weaker evidence in favor than they would in the absence of such an incentive.

And there are good reasons to believe that the cost differential exists. We may presumably discount (alleged) evidence from introspection, as it’s unreliable for two reasons (unreliability of introspection in the presence of incentives for self-deception; unreliability of reports of introspection, in the presence of incentives for deception). But that’s not all we’ve got. For example, in the linked comment, you write:

Like, it didn’t actually take me very much time to generate the phrase “accountability for alleged harm from simplifications” rather

[...]
SomeoneYouOnceKnew:
Not gonna lie, I lost track of the argument on this line of comments, but pushing back on word-bloat is good.
jimmy:

I'll go on record as a counterexample here; I very much want politeness norms to be enforced here, and in my personal life I will pay great costs in order to preserve or create my freedom to be blunt. The requirement for me to be cautious of how I say things here is such a significant cost that I post here far less than I otherwise would. The cost is seriously non-insignificant.

The reason I don't bitch about it is that I recognize that it's necessary. Changing norms to allow people to be relatively more inconsiderate wouldn't actually make things better. It's not just that "pandering to idiots" calls for a euphemism, it's that it probably calls for a mindset that is not so dismissive to people if they're going to be in or close enough to your audience to be offended. Like, actually taking them into consideration and figuring out how to bridge that gap. It's costly. It's also necessary, and often pays off.

I would like to be able to say "Zack, you stupid twat" without having to worry about getting attacked for doing so, but until I've proven to you that I respect you enough that it's to be taken as an affectionate insult between friends.... phrasing things that way wouldn't actually [...]

(Worth noting that I used to spend a great deal of effort and energy on putting up with the headache of wading through Zack's style, for the occasional worth-it nugget of insight; the moment when that constant expenditure of effort became clearly not worth it anymore was when Zack started off a(n also-otherwise-flawed) critique by just asserting "This is insane."

Even if it had, in fact, been insane, Zack would've been more effective if he'd been willing to bother with even the tiniest of softenings (e.g. "this sounds insane to me," which, in addition to being socially smoother is also literally more true, as a reflection of the actual state of affairs).

As it was, though, he was just so loudly + overconfidently + rudely wrong that it was enough to kill my last remaining willingness-to-tolerate his consistent lack-of-epistemic-hygiene-masquerading-as-a-preference-for-directness.)

Would it help if I apologized? I do, actually, regret that comment. (As you correctly point out here, it wasn't effective; it didn't achieve my goals at all.)

The reason I was reluctant to apologize earlier is because I want to be clear that the honest apology that I can offer has to be relatively narrowly-scoped: I can sincerely apologize specifically for that particular pointlessly rude blog comment, and I can sincerely make an effort to conform to your preferred norms when I'm writing a reply to you specifically (because I know that you specifically don't like the punchy-attacky style I often use), but I'm not thereby agreeing to change my commenting behavior when talking to people who aren't you, and I'm not thereby agreeing that your concept of epistemic hygiene is the correct one.

I'm worried that a narrowly-scoped apology will be perceived as insufficient, but I think being explicit about scope is important, because fake apologies don't help anyone: I only want to say "I'm sorry; I won't do it again" about the specific things that I'm actually sorry for and actually won't do again.

So—if it helps—I hereby apologize for my comment of 4 December 2021 on an earlier draft of "Basic[...]

Even if it had, in fact, been insane, Zack would've been more effective if he'd been willing to bother with even the tiniest of softenings (e.g. "this sounds insane to me," which, in addition to being socially smoother is also literally more true, as a reflection of the actual state of affairs).

Softening like this is one of those annoying things I wish we could do away with because it's smurf naming. Saying that something is insane is literally a claim that I think it's insane, and it's only because of naive epistemology that we think some other meaning is possible.

I only started adding softening because Duncan wouldn't shut up about the lack of smurfs in my comments.

philh:
But Duncan's suggested softening was "this sounds insane to me", not "I think this is insane". Like, consider the dress. We might imagine someone saying any of:

1. "The dress is (white/blue)."
2. "I think the dress is (white/blue)."
3. "The dress looks (white/blue) to me."

I think that in practice (1) and (2) mean different things; on a population level, they'll be said by people whose internal experiences are different, and they'll justly be interpreted differently by listeners. But even if you disagree with that, surely you'd agree that (3) is different? Like, "I think the dress is white but it's actually blue" is admittedly a kind of weird thing to say, but "the dress looks white to me but it's actually blue" is perfectly normal, as is "...but I think it's actually blue", or "...but I don't know what color it actually is". It may be that the dress looks blue to you and also you think it's actually blue, but these are two importantly different claims!

I would further suggest that if the dress happens to look blue to you, but also you're aware that it looks blue to a lot of people and white to a lot of people, and you don't know what's going on, and you nonetheless believe confidently that the dress is blue, you are doing something wrong. (Even though you happen to be correct, in this instance.)

When it comes to insanity, I think something similar happens. Whether or not something sounds insane to me is different from whether it actually is insane. Knowing that, I can hold in my head ideas like "this sounds insane to me, but I might be misinterpreting the idea, or I might be mistaken when I think some key premise that it rests on is obviously false, or or or... and so it might not be insane".

And so we might stipulate that Zack's "this is insane" was no more or less justifiable than "I think this is insane" would have been. But we should acknowledge the possibility that in thinking it insane, he was doing something wrong; and that thinking and saying "thi[...]
Gordon Seidoh Worley:
My point is that something cannot actually be insane, it can only be insane by some entity's judgment. Insanity exists in the map, not the territory. In the territory there's just stuff going on. We're the ones that decide to call it insane. Maybe that's because there's some stable pattern about the world we want to put the label insanity on, and we develop some collective agreement about what things to call insane, but we're still the ones that do it.

If you take this view, these statements don't have much difference between them on a fundamental level because "The dress is X" means something like "I assess the dress to be X" since you're the one speaking and are making this call. We do have things that mean something different, like "I think other people think the dress is X", but that's making a different type of claim than your 3 statements, which I see as making essentially the same fundamental claim with minor differences in how it's expressed to try to convey something about the process by which you made the claim so that others can understand your epistemic state, which is sometimes useful but you can also just say this more directly with something like "I'm 80% sure the dress is X".
Duncan Sabien:
A big part of what I'm often doing in my head is simulating a room of 100-1000 people listening, and thinking about what a supermajority of them are thinking or concluding. When you go from e.g. "that sounds insane to me" or "I think that's crazy" to "that is crazy," most of what I think is happening is that you're tapping into something like "...and 70+ out of 100 neutral observers would agree."

Ditto with word usage; one can use a word weirdly and that's fine; it doesn't become a wrong usage until it's a usage that would reliably confuse 70+% of people/reliably cause 70+% of people to conclude the wrong thing, hearing it. "Wrong in the territory" in this case being "wrong in the maps of a supermajority" + "it's a socially constructed thing in the first place."
philh:
I'm baffled by this, and kinda just going to throw a bunch of reactions out there without trying to build them into a single coherent reply.

If someone says "the dress looks white to me, but I think it's actually blue"... how would you analyze that? From this it sounds like you'd think they're saying "I assess the dress to be white, but I assess it to be blue"? To me it has a perfectly natural meaning, along the lines of "when I look at this picture my brain tells me that it's white. But I'm reliably informed that it's actually blue, and that the white appearance comes from such-and-such mental process combined with the lighting conditions of the photo". (edit: actually, "along the lines of" isn't quite what I mean there. It's more like "this is the kind of thing that might cause someone to say those words".)

It sounds to me like you're trying to say there's, on some level, no meaningful distinction between how something is and how we assess it to be? But how something appears to be and how we assess it to be are still very different!

But "I'm 80% sure the dress is X" doesn't convey anything about the process by which I came to believe it? It's simply a conclusion with no supporting argument. Meanwhile "the dress looks X" is an argument with no ultimate conclusion. If a person says that and nothing else, we might reasonably guess that they probably think the dress is X, similar to how someone who answers "is it going to rain?" with "the forecast says yes" probably doesn't have any particular grounds to disbelieve the forecast. But even if we assume correctly that they think that, both the explicit and implicit information they've conveyed to us are still different versus "I'm _% confident the dress is X" or "I'm _% confident it's going to rain".
Duncan Sabien:
Words mean what they mean, in practice. In practice, humans (en masse) assign genuinely different weights/strengths to "This is insane" and "This sounds insane to me." The response shows that they are meaningfully different. I agree (?) with you (assuming you concur with the following) that it would be nice if we had better and more functional terminology, and could make clearer distinctions without spending words that do indeed feel extraneous. But that's not the world we live in, and given the world we live in, I disagree that it's smurf naming.
Ninety-Three:
I agree that people hearing Zack say "I think this is insane" will believe he has a lower P(this is insane) than people hearing him say "This is insane", but I'm not sure that establishes the words mean that.

If Alice goes around saying "I'm kinda conservative" it would be wise to infer that she is probably conservative. If Bob goes around saying "That's based" in the modern internet sense of the term, it would also be wise to infer that he is probably a conservative. But "based" doesn't mean Bob is conservative; semantically it just means something like "cool", and then it happens to be the case that this particular synonym for cool is used more often by conservatives than liberals.

If it turned out that Alice voted party line Democrat and loved Bernie Sanders, one would have a reasonable case that she had used words wrong when she said she was kinda conservative; those words mean basically the opposite of her circumstances. If it turned out that Bob voted party line Democrat and loved Bernie Sanders, then one might advise him "your word choice is causing people to form a false impression, you should maybe stop saying based", but it would be weird to suggest this was about what based means. There's just an observable regularity of our society that people who say based tend to be conservative, like how people who say "edema" tend to be doctors.

If Zack is interested in accurately conveying his level of confidence, he would do well to reserve "That's insane" for cases where he is very confident and say "That seems insane" when he is less confident. If he instead decided to use "That's insane" in all cases, that would be misleading. But I think it is significant that this would be a different kind of misleading than if he were to use the words "I am very confident that is insane", even if the statements cause observers to make the exact same updates.
Duncan Sabien:
(My point in the comment above is merely "this is not contentless filler; these distinctions are real in practice; if adding them feels onerous or tedious it's more likely because one is blind to, or does not care about, a real distinction, than because there's no real difference and people want you to waste time adding meaningless words." A lot of people act along lines that go something like "well these words SHOULD be taken to mean X, even though they predictably and reliably get interpreted to mean Y, so I'm going to keep saying them and when other people hear 'Y' I'll blame them, and when other people ask me to say something different I will act put-upon." <—That's a caricature/extremer version of the actual position the actual Gordon takes; I'm not claiming Gordon's saying or doing anything anywhere near that dumb, but it's clear that there really are differences in how these different phrases are perceived, at the level of hundreds-of-readers.)
Ninety-Three:
Is it wrong for Bob the Democrat to say "based" because it might lead people to incorrectly infer he is a conservative? Is it wrong for Bob the plumber to say "edema" because it might lead people to incorrectly infer he is a doctor? If I told Bob to start saying "swelling" instead of "edema" then I feel like he would have some right to defend his word use: no one thinks edema literally means "swelling, and also I am a doctor" even if they update in a way that kind of looks like it does.

I don't think we have a significant disagreement here, I was merely trying to highlight a distinction your comment didn't dwell on, about different ways statements can be perceived differently. "There is swelling" vs "There is swelling and also I am a doctor" literally means something different, while "There is swelling" vs "There is edema" merely implies something different to people familiar with who tends to use which words.
Duncan Sabien:
Yes, but I don't think this is particularly analogous, specifically because the difference in interpretation, in practice, between "swelling" and "edema" seems to me like it's likely at least an order of magnitude smaller than the difference in interpretation, in practice, between "this is crazy" and "this sounds crazy to me."

As for whether either of these usages are wrong, it depends entirely on whether you want to successfully communicate or not. If you reliably cause your listener to receive concepts that are different than those you were trying to transmit, and this is down to utterly predictable boring simple truths about your language usage, it's certainly your call if you want to keep doing a thing you know will cause wrong beliefs in the people around you.

Separately, 100% of the people I've encountered using the word "based" are radical leftist transfolk, and there are like twelve of them?
Ninety-Three:
I understood "based" to be a 4chan-ism but I didn't think very hard about the example; it is possible I chose a word that does not actually work in the way I had meant to illustrate. Hopefully the intended meaning was still clear.

I think I should just add my own data point here, which is that Zack and I have been on polar opposite sides of a pretty emotional debate before, and I had zero complaints about their conduct. In fact, ever since then, I think I'm more likely to click on a post if I see that Zack wrote it.

Zack_M_Davis:
Thanks for chiming in; this is encouraging to hear. I'm imagining the pretty emotional debate you're thinking of is the one on "My Dating Plan ala Geoffrey Miller" in July 2020? Interestingly, I think my behavior there was much ruder than anything Duncan's objected to from me, so I think your reaction is evidence that there's a lot of interpersonal variation in how much "softening" different people think is desirable or necessary.
Rafael Harth:
It was that general debate about content moderation. Pretty sure it wasn't all in the comments of that post (though that may have been the start); I don't remember the details. It's also possible that my recollection includes back and forth you had with [other people who defended my general position].
jimmy:
I'm confused. It seems to me that you, Zack, and I all have similar takes on the example you bring up, but the fact that you say this here suggests that you don't see us all as in clear agreement?

I don't see us all as in clear agreement; I think we're at least somewhat in nominal agreement but I have found Zack to be ... I don't mean this as a contentless insult, I mean it as a literal attempt-to-model ... irrationally fixated on being anti-polite, and desperately fending off attempts to validate or encode any kind of standard or minimum bar of politeness.

By "irrationally" I mean that he seems to me to do so by irresistible reflex, with substantial compulsion/motive force, even when the resulting outcome is unambiguously contra his explicitly stated goals or principles.

To put things in Zack's terminology, you could say that he's (apparently) got some kind of self-reinforcing algorithmic intent to be abrasive and off-putting and over-emphatic. Even where more reserved language would be genuinely truer, less misleading to the audience, and more in line with clear and precise word usage (all goals which Zack ostensibly ranks pretty high in the priority list), there's (apparently) some kind of deep psychological pressure that reliably steers him in the other direction, and makes him vehemently object to putting forth the (often pretty minimal) effort required.

Similarly, even wh[...]

FYI, having recently stated "man I think Duncan and Zack should be seeing themselves more as allies", I do want to note I agree pretty strongly with this characterization. I think Zack probably also agrees with the above during his more self-aware moments, but often not in the middle of a realtime discussion. 

I do think Zack should see this fact about himself as a fairly major flaw according to his own standards, although it's not obvious to me that the correct priority for him should be "fixing the surface-visible-part of the flaw", and I don't know what would actually be helpful. 

My reasoning for still thinking it's sad for Zack/Duncan to not see each other more as allies routes primarily through what I think 'allyship' should mean, given the practicalities of the resources available in the world. I think the people who are capable of advancing the art of rationality are weird and spiky and often come with weird baggage, and... man, sorry, those are the only people around; it's a very short list; if you wanna advance the art of rationality you need to figure out some way of dealing with that. (When I reflect a bit, I don't actually think Duncan should necessarily be doing[...]

Zack_M_Davis:
I also think it's sad that Duncan and I apparently can't be allies (for my part, I like a lot of Duncan's work and am happy to talk with him), but I think there's a relevant asymmetry.

When my weird baggage leaks into my attempted rationality lessons, I think there's a corrective mechanism insofar as my weird baggage pushes me to engage with my critics even when I think they're being motivatedly dumb: if I get something wrong, gjm will probably tell you about it. Sometimes I don't have time to reply, but I will never, ever ban gjm from commenting on my posts, or insist that he pre-emptively exert effort trying to think of reasons that he's the one who's missing something, or complain that interacting with him doesn't feel cooperative or collaborative.

When Duncan's weird baggage leaks into his attempted rationality lessons, I think there's much less of a corrective mechanism insofar as Duncan feels free to ban critics that he thinks are being motivatedly dumb. If Duncan's judgements about this are correct, he saves a lot of time and emotional energy that he can spend doing other things. (I'm a bit jealous.) But if his judgements are ever wrong, he loses a chance to discover his mistakes.

Of course, I would say that! (The two paragraphs I just typed were clearly generated from my ideology; someone else with a different way of thinking might be able to think of reasons why I'm wrong, that I can't see by myself.) If you do, I hope you'll let me know in the comments!
Duncan Sabien:
This whole comment is a psy-op. It was a mistake for me to leave a comment up above in the first place, and I came to my senses and deleted it literally less than a minute after I hit "enter," but that didn't stop Zack from replying twenty minutes later and now we have a thread so fine, whatever.

When Zack's weird baggage leaks into his attempted rationality lessons, he calls people insane and then writes multi-thousand-word screeds based on his flawed interpretations, which he magnanimously says the other person is perfectly welcome to correct! leaving people the following options:

* Spend hours and hours of their scant remaining lifetimes laboriously correcting the thousand-yard sprint he already took down the wrong trailhead, or
* Leave his uncanny valley misinterpretation there, unaddressed, where it will forever anchor subsequent interpretations, pulling them toward an attractor, and also make the author seem churlish or suspiciously unable-to-rebut (which lends his interpretation further apparent strength)

... which makes being here exhausting and intolerable.

Zack could, of course, just not do this. It's entirely within his power! He could (for instance), when he forms a knee-jerk interpretation of someone else's statement that he finds crazy or upsetting, simply ask whether that was the interpretation the author intended, before charging full-steam ahead with a preemptive critique or rebuttal. (You know, the way you would if you were here to collaborate, and had a shred of respect for your interlocutors.) This is even easier! It requires less effort! It doesn't require e.g. being charitable, which for some reason Zack would rather die than do.

But Zack does not do this, because, for whatever reason, Zack values [preserving his god-given right to be a jump-to-conclusions asshole] over things like that. He'll claim to be sad about our inability to communicate well, but he's not actually sad enough to cut it out, or even just cut back a little. (I thi[...]

Thanks for your thoughts. (Strong-upvoted.)

the essay where I finally gave up [...] This was not an aberrant event. [...] one more straw on the camel's back

Yes, that December 2021 incident was over the line. I'm sorry. In retrospect, I wish I hadn't done that—but if I had taken a few more moments to think, I would have been able to see it without retrospect. That was really stupid of me, and it made things worse for both of us.

You're also correct to notice that the bad behavior that I don't endorse on reflection can be seen as a more extreme version of milder behavior that I do endorse on reflection. (Thus the idiom "over the line", suggesting that things that don't go over the line are OK.) I wish I had been smart enough to only do the mild version, and never overshoot into the extreme version.

ignoring the text wherein his objection was, in fact, addressed

Are you referring to the paragraph that begins, "If two people disagree, it's tempting for them to attempt to converge with each other [...]"? In a comment to Phil H., I explained why that paragraph didn't satisfy me. (Although, as I acknowledged to Phil, it's plausible that I should have quoted and acknowledged that parag[...]

(I think I would have noticed if that paragraph had been in the December 2021 version, but if you say it was, I'll take your word for it—which would imply that my December 2021 behavior was even worse than I've already admitted; I owe you a much bigger apology in that case.)

It was. That's why I was (and remain) so furious with you (Edit: and also am by default highly mistrustful of your summaries of others' positions).

Zack_M_Davis:
Thanks for telling me (strong-upvoted). That makes sense as a reason for you to be furious with me. As the grandparent says, I owe you a bigger apology than my previous apology, which appears below.

I hereby apologize for my blog comment of 4 December 2021, on an earlier revision of "Basics of Rationalist Discourse". In addition to the reasons that it was a bad comment in context that I listed in my previous apology, it was also a bad comment for failing to acknowledge that the text of the post contained a paragraph addressing the comment's main objection, which is a much more serious error. I am embarrassed at my negligence. To avoid such errors in the future, I will endeavor to take some time to emotionally cool down and read more carefully before posting a comment, when I notice that I'm tempted to post a comment while emotionally activated.

If you'd like me to post a variation of this in a more prominent location (like Facebook or Twitter), I'd be willing to do that. (I think I'd want to spend a few more minutes to rewrite the lesser reasons that the comment was bad in context as its own sentence, rather than linking to the previous apology.)
Duncan Sabien:
I don't know what to say in response. Empirically, this apology did zero to reduce the extremely strong deterrent of "God dammit, if I try to post something on LessWrong, one way or another Zack and Said are going to find a way to make that experience miserable and net negative," which, in combination with the energy that this thread burned up, has indeed resulted in me not posting, where counterfactually I would've posted three essays. (I'm only here now because you're bumping the threads.)

(Like, there are three specific, known essays that I have not posted, because of my expectations coming off of this thread and the chilling effect of "I'll have to deal with Zack and Said's responses.")

(Also the reason my Basics post ended up being so long-winded was because, after my experience with the partial draft going up by mistake, I was trying quite hard to leave a future Zack no ways to make me regret publishing/no exposed surfaces upon which I could be attacked. I ended up putting in about 20 extra hours because of my past experience with you, which clearly did not end up paying off; I underestimated just how motivated you would be to adversarially interpret and twist things around.)

I tried blocking, and that wasn't enough to get you to leave me alone. Sounds like you win.
Duncan Sabien:
Literally only you and Said have these twin problems (among long-lasting prolific LW participants). This is you saying "but but but if you claim ZERO is too little and a BILLION is too much, then how is there any room for legitimate criticism to exist?" It's somewhere between zero and a billion, like every other person on LessWrong manages to do just fine all the time. Late edit: we have a term for this thing; it's called "fallacy of the grey."

Literally only you and Said have these twin problems (among long-lasting prolific LW participants). This is you saying "but but but if you claim ZERO is too little and a BILLION is too much, then how is there any room for legitimate criticism to exist?"

It's somewhere between zero and a billion, like every other person on LessWrong manages to do just fine all the time.

I think it's important to note survivorship bias here; I think there are other people who used to post on LessWrong and do not anymore, and perhaps this was because of changes in norms like this one.[1] It also seems somewhat likely to me that Said and Zack think that there's too little legitimate criticism on LW. (I often see critical points by Zack or Said that I haven't yet seen made by others and which I agree with; are they just faster or are they counterfactual? I would guess the latter, at least some of the time.)

As well, Zack's worry is that even if the guideline is written by people who have a sense that criticism should be between 4 and 12, establishing the rule with user-chosen values (like, for example, LW has done for a lot of post moderation) will mean there's nothing stopping someone from deciding that c[...]

Duncan Sabien:
I don't think I am making this request. I do strongly predict that if I made free to verbally abuse Zack in the same fashion Zack verbally abuses others, I would be punished more for it, in part because people would be like "well, yeah, but Zack just kinda is like that; you should do better, Duncan" and in part because people would be like "DUDE, Zack had a traumatic experience with the medical system, you calling him insane is WAY WORSE than calling someone else insane" and "well, if you're not gonna follow your own discourse rules, doesn't that make you a hypocrite?" It's an asymmetric situation that favors the assholes; people tend not to notice "oh, Duncan rearmed with these weapons he advocates disarming because his interlocutors refused to join the peace treaty." 
Vaniver:
Sure, I buy that any functional garden punishes not just hypocrisy but also failing to follow the rules of the garden, which I'm imputing as a motivation for your second and third paragraphs. (I also buy that lots of "let people choose how to be" approaches favor assholes.) But... I think there's some other message in them, that I can't construct correctly?

It seems to me like we're in a broader cultural environment where postmodern dissolution of moral standards means the only reliable vice to attack others for is hypocrisy. I see your second and third paragraphs as, like, a mixture of disagreeing with this ('I should not be criticized for hypocrisy as strongly as I predict I would be if I were hypocritical') and maybe making a counteraccusation of hypocrisy ('if there were evenly applied standards of conduct, I would be protected from Zack's misbehavior, but as is I am prevented from attacking Zack but the reverse is not true').

But I don't think I really agree with either of those points, as I understand them. I do think hypocrisy is a pretty strong argument against the proposed rules, and also that double standards can make sense (certainly I try to hold LW moderators to higher standards than LW users).
Duncan Sabien:
I'm saying: "I'd like for us to not have a culture wherein it's considered perfectly kosher to walk around responding to other users' posts with e.g. 'This is insane' without clearing a pretty high bar of, y'know, the thing actually being insane. To the extent that Zack is saying 'hey, it's fine, you can verbally abuse me, too!' this is not a viable solution."

Fortunately, it seems that LessWrong generally agrees; both my suggested norms and Robbie's suggested norms were substantially more popular than either of Zack's weirdly impassioned defenses-of-being-a-jerk.

I guess I don't know what you mean by "neutral norms" if you don't mean either "the norms Duncan's proposing, that are in line with what Julia Galef and Scott Alexander and Dan Keys and Eric Rogstad and Oliver Habryka and Vaniver and so on and so forth would do by default," or "the norms Zack is proposing, in which you act like a dick and defend it by saying 'it's important to me that I be able to speak plainly and directly.'"
Said Achmiz:
I endorse this observation.
Duncan Sabien:
No, I'm not saying Zack and Said should stop playing the game, I'm saying they should stop being sanctimonious about their inability to do what the vast majority of people have a pretty easy time doing ("checking interpretations" and "sharing any of the interpretive labor at all", respectively).

I would be surprised to hear you claim that the valid critical points that Zack and Said make are contingent on them continuing to do the shitty things of (respectively) leaping to conclusions about A definitely implying B, or refusing to believe that A implies A until someone logically proves A→A. The times I've seen Zack and Said being useful or perceptive were when they weren't doing these useless and unproductive moves, but rather just saying what they thought.

When Zack says what he thinks, instead of going "hey, everybody, look how abhorrent my strawman of Rob's position is!" and trying to trick everyone into thinking that was Rob's position and that he is the sole bastion of epistemic virtue holding back the tides of evil, it's often useful. When Said says what he thinks, instead of demanding that people rigorously define "sky," "blue," and "is" before allowing the conversation to move on from the premise "the sky is blue today," it's often useful.

There's absolutely nothing that Zack is currently accomplishing that couldn't have been accomplished if he'd first written a comment to Rob saying "did you mean X?" He could've even gone off and drafted his post while waiting on an answer; it needn't have even delayed his longer rant, if Rob failed to reply.

Acting like a refusal to employ that bare minimum of social grace is a virtue is bullshit, and I think Zack acts like it is. If you're that hostile to your fellow LWers, then I think you are making a mistake being here.

There's absolutely nothing that Zack is currently accomplishing that couldn't have been accomplished if he'd first written a comment to Rob saying "did you mean X?" [...] Acting like a refusal to employ that bare minimum of social grace is a virtue is bullshit

It's not that I think refusing to employ the bare minimum of social grace is a virtue. It's that I wasn't aware—in fact, am still not aware—that confirming interpretations with the original author before publishing a critical essay constitutes the bare minimum of social grace. The idea that it's somehow bad behavior for intellectuals to publish essays about other intellectuals' essays without checking with the original author first is something I've never heard before; I think unilaterally publishing critical essays is a completely normal thing that intellectuals do all the time, and I see no particular reason for self-identified "rationalist" intellectuals to behave any differently.

For an arbitrary example from our local subculture, Yudkowsky once wrote "A Reply to Francois Chollet" criticizing Chollet's essay on the purported impossibility of an intelligence explosion. Did Yudkowsky first write an email to Chollet saying [...]

… inability to do what the vast majority of people have a pretty easy time doing (“checking interpretations” and “sharing any of the interpretive labor at all”, respectively).

My objection to this sort of claim is basically the same as my objection to this, from an earlier comment of yours:

[Interacting with Said] has never once felt cooperative or collaborative; I can make twice the intellectual progress with half the effort with a randomly selected LWer

And similar to my objection in a much earlier discussion (which I can’t seem to find now, apologies) about Double Crux (I think), wherein (I am summarizing from memory) you said that you have usually been able to easily explain and apply the concept when teaching it to people in person, as a CFAR instructor; to which I asked how you could distinguish between your interlocutor/student really understanding you, vs. the social pressure of the situation (the student/teacher frame, your personal charisma, etc.) causing them, perhaps, to persuade themselves that they’ve understood, when in fact they have not.

In short, the problem is this:

If “sharing interpretive labor”, “making intellectual progress”, etc., just boils down to “agree[...]

Said Achmiz:
I object to this characterization, which is inaccurate and tendentious.

That's not what I meant. I affirm Vaniver's interpretation ("Zack's worry is that [...] establishing the rule with user-chosen values [...] will mean there's nothing stopping someone from deciding that criticism has to be above 8 and below 6").

(In my culture, it's important that I say "That's not what I meant" rather than "That's a strawman", because the former is agnostic about who is "at fault". In my culture, there's a much stronger duty on writers to write clearly than there is on readers to maintain uncertainty about the author's intent; if I'm unhappy that the text I wrote led someone to jump to the wrong conclusion, I more often think that I should have written better text, rather than that the reader shouldn't have jumped.)

Another attempt to explain the concern (if Vaniver's "above 8 and below 6" remark wasn't sufficient): suppose there were a dishonest author named Mallory, who never, ever admitted she was wrong, even when she was obviously wrong. How can Less Wrong protect against Mallory polluting our shared map with bad ideas?

My preferred solution (it's not perfect, but it's the best I have) is to have a culture that values unilateral criticism and many-to-many discours[...]

Duncan Sabien:
This is ignoring the fact that you're highly skilled at deluding and confusing your audience into thinking that what the original author wrote was X, when they actually wrote a much less stupid or much less bad Y. (e.g. repeatedly asserting that Y is tantamount to X and underplaying or outright ignoring the ways in which Y is not X; if you vehemently shout "Carthage delenda est" enough times people do indeed start becoming more and more afraid of Carthage regardless of whether or not this is justified.)

You basically extort effort from people, with your long-winded bad takes, leaving the author with a choice between:

a) allowing your demagoguery to take over everyone's perceptions of their point, now that you've dragged it toward a nearby (usually terrible) attractor, such that even though it said Y everybody's going to subsequently view it through the filter of your X-interpretation, or

b) effortfully rebutting every little bit of your flood of usually-motivated-by-antipathy words.

Eventually, this becomes exhausting enough that the correct move is to kick Mallory out of the garden, where they do not belong and are making everything worse far disproportionate to their contribution. Mallory can go write their rebuttals in any of the other ten thousand places on the internet that aren't specifically trying to collaborate on clear thinking, clear communication, and truth-seeking. The garden of LessWrong is not particularly well-kept, though.
dxu:

This is ignoring the fact that you're highly skilled at deluding and confusing your audience into thinking that what the original author wrote was X, when they actually wrote a much less stupid or much less bad Y.

This does not seem like it should be possible for arbitrary X and Y, and so if Zack manages to pull it off in some cases, it seems likely that those cases are precisely those in which the original post's claims were somewhat fuzzy or ill-characterized—

(not necessarily through the fault of the author! perhaps the subject matter itself is simply fuzzy and hard to characterize!)

—in which case it seems that devoting more cognitive effort (and words) to the topic might be a useful sort of thing to do, in general? I don't think one needs to resort to a hypothesis of active malice or antipathy to explain this effect; I think people writing about confusing things is generally a good thing (and if that writing ends up being highly upvoted, I'm generally suspicious of explanations like "the author is really, really good at confusing people" when "the subject itself was confusing to begin with" seems like a strictly simpler explanation).

Zack_M_Davis:
(Considering the general problem of how forum moderation should work, rather than my specific guilt or innocence in the dispute at hand)

I think positing non-truth-tracking motivations (which can be more general than "malice or antipathy") makes sense, and that there is a real problem here: namely, that what I called "the culture of unilateral criticism and many-to-many discourse" in the great-grandparent grants a structural advantage to people who have more time to burn arguing on the internet, analogously to how adversarial court systems grant a structural advantage to litigants who can afford a better lawyer.

Unfortunately, I just don't see any solutions to this problem that don't themselves have much more serious problems? Realistically, I think just letting the debate or trial process play out (including the motivated efforts of slick commenters or lawyers) results in better shared maps than trusting a benevolent moderator or judge to decide who deserves to speak.

To the extent that Less Wrong has the potential to do better than other forums, I think it's because our culture and userbase is analogous to a court with a savvier, more intelligent jury (that requires lawyers to make solid arguments, rather than just appealing to their prejudices), not because we've moved beyond the need for non-collaborative debate (even though idealized Bayesian reasoners would not need to debate).
Duncan Sabien:
(It's not a hypothesis; Zack makes his antipathy in these cases fairly explicit, e.g. "this is the egregore I'm fighting against tooth and nail" or similar. Generally speaking, I have not found Zack's writing to be confusion-inducing when it's not coming from his being triggered or angry or defensive or what-have-you.)
Duncan Sabien:
Separately: I'm having a real hard time finding a coherently principled position that says "that's a strawman" is off-limits because it's too accusatory and reads too much into the mind of the author, but is fine with "this is insane."

Thanks (strong-upvoted), this is a pretty good psychoanalysis of me; I really appreciate it. I have some thoughts about it which I will explain in the remainder of this comment, but I wouldn't particularly expect you to read or reply to it unless it's interesting to you; I agree that it makes sense for you to not expend patience and effort on people you don't think are worth it.

fending off attempts to validate or encode any kind of standard or minimum bar of politeness [...] trauma-response type overreaction. [...] two separate essays

Given that my traumatic history makes me extremely wary that attempts to validate or encode any kind of standard or minimum bar of politeness will in practice be weaponized to shut down intellectually substantive discussions, I think it makes sense for me to write critical essays in response to such attempts? It's true that someone without my traumatic history probably wouldn't have thought of the particular arguments I did. But having thought of the arguments, they seemed like a legitimate response to the text that was published.

The reason this sits okay with my conscience is because I think I apply it symmetrically. If someone else's traumatic hi[...]

Duncan Sabien:
The problem is, you are an extremely untrustworthy judge of the difference between things being crazy in the actual territory versus them being crazy in your weird skewed triggered perceptions, and you should know this about yourself. I agree 100% that sometimes things are crazy, and that when they are crazy it's right and proper to label them as such.

"This is crazy" and "this seems crazy to me" are different statements, with different levels of confidence attached, just as "you are lying" and "it seems like you're lying" are different statements. This is how words work, in practice; if you expose similar populations to "X is lying" and "X seems like they're lying" the two populations will come away with reliably different impressions.

Your speech, though, erodes and invalidates this distinction; you say "X is crazy" when the actual claim you're justified to make is "X seems crazy to me." You are sufficiently blind to the distinction that you even think that me saying "treat these statements differently" is me generically trying to forbid you from saying one of them.

I'm not asking you to stop saying true things, I'm asking you to stop lying, where by lying I mean making statements that are conveniently overconfident. When you shot from the hip with your "this is insane" comment at me, you were lying, or at the very least culpably negligent and failing to live up to local epistemic hygiene norms. "This sounds crazy to me" would have been true.
Raemon:
Speaking somewhat in my mod voice, I do basically also want to say "yes, Zack, I also would like you to stop lying by exaggeration/overconfidence".

My hesitation about speaking-in-mod voice is that I don't think "overconfidence as deceit" has really graduated to a site norm (I know other LW team members who expressly don't agree with it, or have qualms about it). I think I feel kinda okay applying some amount of moderator force behind it, but not enough to attach a particular warning of moderator action at this point.

(I don't endorse Duncan's entire frame here, and I think I don't endorse how upset he is. I honestly think this thread has a number of good points on both sides which I don't expect Duncan to agree (much?) with right now. But, when evaluating this complaint about Zack in particular, I do think Zack should acknowledge his judgment here has not been good and the result is not living up to the standards that flow fairly naturally from the sequences.)
Said Achmiz:
Er, sorry, can you clarify—what, exactly, has Zack said that constitutes “lying by exaggeration/overconfidence”? Is it just that one “this is insane” comment, or are we talking about something else…?
Raemon:
Thinking a bit more, while I do have at least one more example of Zack doing this thing in mind, and am fairly confident I would find more (and think they add up to being bad), I'm not confident that if I were writing this comment for myself without replying to Duncan, I'd have ended up wording the notice the same way (which in this case I think was fairly overshadowed by Duncan's specific critique).

I'm fairly confident there are a collection of behaviors that add up to something Zack's stated values should consider a persistent problem, but not sure I have a lot of examples of any-particular-pattern that I can easily articulate offhand.

I do think Zack fairly frequently does a "Write a reply to a person's post as if it's a rebuttal to the post, which mostly goes off and talks about an unrelated problem/frame that Zack cares about without engaging with what the original author was really talking about." In this particular post, I think there's a particular sleight-of-hand about word definitions I can point to as feeling particularly misleading. In Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think, I don't think there's a concrete thing that's deceptive, but something about it does feel slightly off.

while I do have at least one more example of Zack doing this thing in mind

Did you mean to link to this comment? Or another of his comments on that post…? It is not clear to me, on a skim of the comments, which specific thing that Zack wrote there might be an example of “lying by exaggeration/overconfidence” (but I could easily have missed it; there’s a good number of comments on that post).

I do think Zack fairly frequently does a “Write a reply to a person’s post as if it’s a rebuttal to the post, which mostly goes off and talks about an unrelated problem/frame that Zack cares about without engaging with what the original author was really talking about.”

Hmm. Certainly the first part of that is true, but I’m not convinced of the second part (“without engaging with what the original author was really talking about”). For example, you mention the post “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think”. I found that said post expressed objections and thoughts that I had when reading Eliezer’s “Meta-Honesty” post, so it seems strange to say that Zack’s post didn’t engage with what Eliezer wrote! (Unless you take the view that what [...]

Zack_M_Davis:
Would it help if we distinguished between a "reply" (in which a commentator explains the thoughts that they had in reaction to a post, often critical or otherwise negative thoughts) and a "rebuttal" (in which the commentator directly contradicts the original post, such that the original post and the rebuttal can't "both be right")? I often write replies that are not rebuttals, but I think this is fine.

Everyone sometimes issues replies that are not rebuttals, but there is an expectation that replies will meet some threshold of relevance. Injecting "your comment reminds me of the medieval poet Dante Alighieri" into a random conversation would generally be considered off-topic, even if the speaker genuinely was reminded of him. Other participants in the conversation might suspect this speaker of being obsessed with Alighieri, and they might worry that he was trying to subvert the conversation by changing it to a topic no one but him was interested in. They might think-but-be-too-polite-to-say "Dude, no one cares, stop distracting from the topic at hand".

The behaviour Raemon was trying to highlight is that you soapbox. If it is in line with your values to do so, it still seems like choosing to defect rather than cooperate in the game of conversation.

Zack_M_Davis:
I mean, I agree that I have soapbox-like tendencies (I often have an agenda, and my contributions to our discourse often reflect my agenda), but I thought I've been meeting the commonsense relevance standard—being an Alighieri scholar who only brings it up when there happens to be a legitimate Alighieri angle on the topic, and not just randomly derailing other people's discussions. I could be persuaded that I've been getting this wrong, but, again, I'm going to need more specific examples (of how some particular post I made misses the relevance standard) before I repent or change anything.
philh:
We might distinguish between:

* Reaction: I read your post and these are the thoughts it generated in me
* Reply: ...and these thoughts seem relevant to what the post was talking about
* Rebuttal: ...and they contradict what you said.

I've sometimes received comments where I'd have found it helpful to know which of these was intended. (Of course a single comment can be all of these in different places. Also a reaction should still not misrepresent the original post.)
Zack_M_Davis:
Sorry, I'm going to need more specific examples of me allegedly "lying by exaggeration/overconfidence" before I acknowledge such a thing. I'm eager to admit my mistakes, when I've been persuaded that I've made a mistake. If we're talking specifically about my 4 December 2021 comment that started with "This is insane", I agree that it was a very bad comment that I regret very much. If we're talking about a more general tendency to "lie by exaggeration/overconfidence", I'm not persuaded yet.

(I have more thoughts about things people have said in this thread, but they'll be delayed a few days, partially because I have other things to do, and partially because I'm curious to see whether Duncan will accept my new apology for the "This is insane" comment.)
Raemon:
The previous example I had on hand was in a private conversation where you described someone as "blatantly lying" (you're anonymized in the linked post), and we argued a bit and (I recall) you eventually agreeing that 'blatantly lying' was not an accurate characterization of 'not-particularly-blatantly-rationalizing' (even if there was something really important about that rationalizing that people should notice). I think I recall you using pretty similar phrasing a couple weeks later, which seemed like there was something sticky about your process that generated the objection in the first place. I don't remember this second part very clearly though.

(I agree this is probably still not enough examples for you to update strongly at the moment if you're going entirely off my stated examples, and they don't trigger an 'oh yeah' feeling that prompts you to notice more examples on your own.)
Zack_M_Davis:
I think it's significant that the "blatant lying" example was an in-person conversation, rather than a published blog post. I think I'm much more prone to exaggerate in real-time conversations (especially emotionally-heated conversations) than I am in published writing that I have time to edit.
Raemon:
Yeah I do agree with that.
Duncan Sabien:
Here's one imo
Duncan Sabien:
(I'm not sure I quite endorse my level of anger either, but there really is something quite rich about the combination of:

* Zack having been so cavalier and rude that I blocked him because he was singlehandedly making LessWrong a miserable place to be, and making "publishing an essay" feel like touching an electric fence
* Zack then strawmanning exactly the part of my post that points out "hey, it's nice when people don't do that"
* Zack, rather than just making his arguments on their own merit, and pointing out the goodness of good things and the badness of bad things, instead painting a caricature of me (and later Rob) as the opposition and thus inextricably tying his stuff to mine/making it impossible to just get away from him
* (I do in fact think that his post anchored and lodestoned people toward his interpretation of that guideline; I recall you saying that after you read his summary/description you nodded and said to yourself "seems about right" but I'd bet $100 to somebody's $1 that if we had a time machine you wouldn't have produced that interpretation on your own; I think you got verbal overshadow'd into it and I think Zack optimizes his writing to verbally overshadow people/cast a spell of confusion in this way; he often relentlessly says "X is Y" in a dozen different ways in his pieces until people lose track of the ways in which X is not Y.)
* (Which confusion Zack then magnanimously welcomed me to burn hours of my life laboriously cleaning up.)
* Zack then being really smug about how he'd never block anybody and how he'd never try to force anybody to change (never mind that I tried to insulate myself from him in lieu of forcing him to change, and would've happily let him be shitty off in his own shitty corner forever)

... it really is quite infuriating. I don't know a better term for it than "rich;" it seems to be a central example of the sort of thing people mean when they say "that's rich.")
Zack_M_Davis:
I agree that it often makes sense to write "This seems X to me" rather than "This is X" to indicate uncertainty or that the people I'm talking to are likely to disagree. Thanks for clarifying that you're not generically trying to forbid me from saying one of them. I appreciate it. Yes, I again agree that that was a bad comment on my part, which I regret. (Thanks to Vaniver for feedback on an earlier draft of this comment.)
jimmy:
I guess I meant "as it applies here, specifically", given that Zack was already criticizing himself for that specific thing, and arguing for rather than against politeness norms in the specific place that I commented. I'm aware that you guys haven't been getting along too well and wouldn't expect agreement more generally, though I hadn't been following closely.

It looks like you put some work and emotional energy into this comment so I don't want to just not respond, but it also seems like this whole thing is upsetting enough that you don't really want to be having these discussions. I'm going to err on the side of not getting into any object-level response that you might not want, but if you want to know how to get along with Zack and not find it infuriating, I think I do understand his perspective (having found myself in similar shoes) well enough to explain how you can do it.
Said Achmiz:
Isn’t it, though? Probabilistically speaking, I mean. Usually, when people say such things (“you should consider updating your overall language heuristics”, etc.) to you, they are in fact your enemies, and the game-theoretically correct response is disproportionate hostility. Now, that’s “usually”, and not “always”; and such things are in any case a matter of degree; and there are different classes of “enemies”; and “disproportionate hostility” may have various downsides, dictated by circumstances; and there are other caveats besides. But, at the very least, you cannot truthfully claim that the all-caps sort of hostile response is entirely irrational in such cases—that it can only be caused by “a trauma-response type overreaction” (or something similar).
Zack_M_Davis:
I don't think Duncan and I are in clear agreement more generally (even if we agree that the particular comment I wrote that caused Duncan to give up on me was in fact a bad comment).

Here's my quick attempt to pass Duncan's Ideological Turing Test on what our feud is about: "one of the most important obstacles to having a culture of clear thinking and clear communication is the tendency for interlocutors to misrepresent one another, to jump to conclusions about what the other person is saying, and lash out at that strawman, instead of appropriately maintaining uncertainty, or split-and-committing pending further evidence. These skills are a prerequisite for being able to have a sane discussion. Empirically, Zack doesn't seem to care about these skills much, if at all. As a result, his presence makes the discussion spaces he's in worse." (I probably didn't pass, but I tried.)

My response to my-attempt-to-pass-Duncan's-ITT (which probably didn't succeed at capturing Duncan's real views) is that I strongly disagree that pro-actively modeling one's interlocutors should be a prerequisite for being able to have a discussion. As an author, it's often frustrating when critics don't understand my words the way I hoped they would, but ultimately, I think it's my responsibility to try to produce text that stands up to scrutiny. I would never tell a critic that they're not passing my ITT, because in my view, passing my ITT isn't their job; their job is to offer their real thoughts on the actual text I actually published. I don't accuse critics of strawmanning unless I expect to be able to convince third parties with an explanation of how the text the critic published substantively misrepresents the text I published. I'm extremely wary that a culture that heavily penalizes not-sufficiently-modeling-one's-interlocutor, interferes with the process of subjecting each other's work to scrutiny.

Again, that's my interpretation of what the feud is about. I'm not claiming to have ac[...]

Yeah, I didn't mean that I thought you two agreed in general, just on the specific thing he was commenting on. I didn't mean to insert myself into this feud and I was kinda asking how I got here, but now that I'm here we might as well have fun with it. I think I have a pretty good feel for where you're coming from, and actually agree with a lot of it. However, agreement isn't where the fun is so I'm gonna push back where I see you as screwing up and you can let me know if it doesn't fit.

These two lines stand out to me as carrying all the weight:

I strongly disagree that pro-actively modeling one's interlocutors should be a prerequisite for being able to have a discussion.

I'm extremely wary that a culture that heavily penalizes not-sufficiently-modeling-one's-interlocutor, interferes with the process of subjecting each other's work to scrutiny.

These two lines seem to go hand in hand in your mind, but my initial response to the two is very different.

To the latter, I simply agree that there's a failure mode there and don't fault you for being extremely wary of it. To the former though.... "I disagree that this thing should be necessary" is kinda a "Tough?". Either it's necessary or it ... (read more)

7Said Achmiz1y
It’s not clear to me that there’s any more “lack of epistemic hygiene” in Zack’s posts than in anyone else’s, yours (and mine) included. If the claim here is that Zack exhibits significantly less epistemic hygiene than… who? You? The average Less Wrong commenter? Either way, it does not seem plausible to me. In most of the cases where you’ve claimed him to be something like “loudly + overconfidently + rudely wrong”, it has always seemed to me that, at best, there was nothing more than a case of “reasonable people might disagree on this one”. Do you disagree with this characterization?
[-]Ben1y122

Some people want motors that are efficient, high-power, or similar. Some people might instead be making a kinetic sculpture out of Lego, and are primarily interested in whether the motor's cycle looks psychedelic and makes a pleasing noise. Neither group is wrong.

Some people want arguments that lead efficiently to a better view of the base reality. Some people are more interested in understanding the opposing side's philosophy and how they reason about it. Some people want the argument to be engaging, fun or dramatic. Some people prioritise still being friends when the argument is over. Some people like the idea of 'winning' an argument like winning a game. None of them are wrong. 

Putting a label on the people who actually want their motors to move energy efficiently (calling them "physicists" or "engineers") and contrasting them with "artists" or something might be a useful line to draw. Similarly, "rationalist discourse" might be a poor label, but if it were called "truth-seeking discussion" or similar, I think it would actually be carving out a fairly specific sub-part of the possible space.

But this seems to contradict the element of Non-Deception. If you're not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?

This is a good question!! Note that in the original footnote in my post, "on the same side" is a hyperlink going to a comment by Val:

"Some version of civility and/or friendliness and/or a spirit of camaraderie and goodwill seems like a useful ingredient in many discussions. I'm not sure how best to achieve this in ways that are emotionally honest ('pretending to be cheerful and warm when you don't feel that way' sounds like the wrong move to me), or how to achieve this without steering away from candor, openness, 'realness', etc."

I think the core thing here is same-sidedness.

That has nothing to do directly with being friendly/civil/etc., although it'll probably naturally result in friendliness/etc.

(Like you seem to, I think aiming for cheerfulness/warmth/etc. is rather a bad idea.)

If you & I are arguing but there's a common-knowledge undercurrent of same-sidedness, then even impassioned and cutting remarks are pretty easy to take in stride. "No, you're being stupid

... (read more)

Note that in the original footnote in my post, "on the same side" is a hyperlink going to a comment by Val

Thanks for pointing this out. (I read Val's comment while writing my post, but unfortunately neglected to add the hyperlink when pasting the text of the footnote into my draft.) I have now edited the link into my post.

the goal isn't to trick people into thinking your disagreements are small, it's to make typical disagreements feel less like battles between warring armies

I think the fact that disagreements often feel like battles between warring armies is because a lot of apparent "disagreements" are usefully modeled as disguised conflicts. That is, my theory about why predictable disagreements are so ubiquitous in human life (despite the fact that Bayesian reasoners can't foresee to disagree) is mostly conflict-theoretic rather than mistake-theoretic.

A simple example: I stole a loaf of bread. A policeman thinks I stole the bread. I claim that I didn't steal the bread. Superficially, this looks like a "disagreement" to an outside observer noticing the two of us reporting different beliefs, but what's actually going on is that I'm lying. Importantly, if I care more about no... (read more)

5Said Achmiz1y
Very strongly seconding this. (I have noticed this pattern on Less Wrong in the past, in fact, and more than once. It is no idle worry, but a very real thing that already happens.)

Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:

  • Measure twice before you cut the steel.
  • Double-check your fittings before you test the engine.
  • Keep track of which direction the axle is supposed to be turning for the type of engine you are making.
  • Etc.

The point of the guidelines isn't to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable mistakes. The guidelines are about helping people move past those mistakes so they can actually build an engine that has a chance of working.

The point of "rationalist guidelines" isn't to enforce a norm of making particular types of beliefs. They exist to help groups of people stay connected to reality at all. People make consistent, predictable mistakes. The guidelines are for helping people avoid them. Regardless of what those beliefs are. 

4Said Achmiz1y
Well, for one thing, we might reasonably ask whether these guidelines (or anything sufficiently similar to these guidelines to identifiably be “the same idea”, and not just “generic stuff that many other people have said before”) are, in fact, needed in order for a group of people to “stay connected to reality at all”. Indeed we might go further and ask whether these guidelines do, in fact, help a group of people “stay connected to reality at all”. In other words, you say: “The guidelines are for helping people avoid [consistent, predictable mistakes]” (emphasis mine). Yes, the guidelines are “for” that—in the sense that they are intended to fulfill the stated function. But are the guidelines good for that purpose? It’s an open question, surely! And it’s one that merely asserting the guidelines’ intent does not do much to answer. But, perhaps even more importantly, we might, even more reasonably, ask whether any particular guideline is any good for helping a group of people “stay connected to reality at all”. Surely we can imagine a scenario where some of the guidelines are good for that, but others of the guidelines aren’t—yes? Indeed, it’s not out of the question that some of the guidelines are good for that purpose, but others of the guidelines are actively bad for it! Surely we can’t reject that possibility a priori, simply because the guidelines are merely labeled “guidelines for rationalist discourse, which are necessary in order to avoid consistent, predictable mistakes, and stay connected to reality at all”—right?
3Drake Morrison1y
I agree wholeheartedly that the intent of the guidelines isn't enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline? If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.

An analogy I keep thinking of is the TypeScript vs. JavaScript tradeoffs when programming with a team. Unless you have a weird special case, it's just straight up more useful to work with other people's code where the type signatures are explicit. There's less guessing, and therefore fewer mistakes. Yes, there are tradeoffs. You gain better understanding at the slight cost of some extra implementation code.

The thing is, you pay that cost anyway. You either pay it upfront, and people can make smoother progress with fewer mistakes, or they make mistakes and have to figure out the type signatures the hard way.

Either people distinguish between their observations and inferences explicitly, or you spend extra time and make predictable mistakes until the participants in the discourse figure out the distinction during the course of the conversation. If they can't, then the conversation doesn't go anywhere on that topic.

I don't see any way of getting around this if you want to avoid making dumb mistakes in conversation. Not every change is an improvement, but every improvement is necessarily a change. If we want to raise the sanity waterline and have discourse that more reliably leads to us winning, we have to change things.
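To make the analogy concrete, here's a minimal sketch (the `Draft` type and `parseDraft` helper are made up for illustration, not from any real codebase): the untyped version forces every caller to guess the contract, while the typed version states it up front.

```typescript
// Untyped: every caller has to guess what `parseLoose` accepts and returns,
// the same way an interlocutor has to guess an unstated premise.
const parseLoose = (input: any): any => JSON.parse(input);

// Typed: the contract is explicit, so misunderstandings surface at the
// boundary instead of propagating into later call sites.
interface Draft {
  title: string;
  body: string;
}

function parseDraft(input: string): Draft {
  const value = JSON.parse(input);
  if (typeof value?.title !== "string" || typeof value?.body !== "string") {
    throw new Error("not a Draft");
  }
  return value as Draft;
}

// parseLoose(42) type-checks even though the call makes no sense;
// parseDraft(42) is rejected by the compiler before anything runs.
```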
2Said Achmiz1y
Yes, sure, we shouldn’t throw away the concept; but that’s not at all a reason to start with the presumption that these particular guidelines are any good! As far as examples go… well, quite frankly, that’s what the OP is all about, right? Apologies, but I am deliberately not responding to this analogy and inferences from it, because adding an argument about programming languages to this discussion seems like the diametric opposite of productive.

That being the normative math, why does the human world's enduringly dominant discourse algorithm take for granted the ubiquity of, not just disagreements, but predictable disagreements?

Well, the paper says disagreement is only unpredictable between agents with the same priors, so it seems like that explains at least part of this?

Debate is also inefficient: for example, if the "defense" in the court variant happens to find evidence or arguments that would benefit the "prosecution", the defense has no incentive to report it to the court, and there's no guarantee that the prosecution will independently find it themselves.

Reporting such evidence will make you exceptional among people who typically hold the defense position; it will no longer be fair for people to say of you "well of course the defense would say that either way". And while you may care very much about the conclusion o... (read more)

I think as usual with rationality stuff there's a good analogy to statistics.

I'm very happy that I never took Stats 101, and that I first learned what a p-value was in a math department "Theory of Statistics" class. Because as I understood it, Stats 101 teaches recipes, rules for when a conclusion is allowed. In the math department, I instead learned properties of algorithms for estimation and decision. There's a certain interesting property of an estimation algorithm for the size of an effect: how large will that estimate be, if the effect is not there? Of a decision rule, ... (read more)
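That property is easy to see by simulation. A rough sketch (the function name, sample size, and cutoff below are my own arbitrary choices for illustration, not from any actual course or paper): generate many datasets in which the true effect is exactly zero, and look at how large the estimates are among the runs a naive threshold rule would flag.

```typescript
// Standard normal draw via the Box-Muller transform.
function randNormal(): number {
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Simulate `trials` datasets of `n` observations whose true mean is 0,
// estimate the "effect" (the sample mean) each time, and record how large
// the estimates are among the runs that clear a naive significance cutoff.
function nullEffectEstimates(n: number, trials: number, cutoff: number): void {
  const flagged: number[] = [];
  for (let t = 0; t < trials; t++) {
    let sum = 0;
    for (let i = 0; i < n; i++) sum += randNormal();
    const estimate = sum / n;
    if (Math.abs(estimate) > cutoff) flagged.push(Math.abs(estimate));
  }
  const rate = flagged.length / trials;
  const mean = flagged.reduce((a, b) => a + b, 0) / Math.max(flagged.length, 1);
  console.log(
    `flagged ${(100 * rate).toFixed(1)}% of null datasets; ` +
      `mean |estimate| among them: ${mean.toFixed(2)}`
  );
}

// With n = 25 the standard error is 0.2, so a cutoff of 0.4 is "two sigma":
// roughly 5% of no-effect datasets get flagged, and the flagged estimates
// all look sizeable even though the true effect is exactly zero.
nullEffectEstimates(25, 10000, 0.4);
```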

As a relatively new person to lesswrong, I agree.

Conversations I've read that end in either party noticeably updating one way or the other have been relatively rare. The one point I'm not sure I agree with: is being able to predict a particular disagreement really a problem?

I suppose being able to predict the exact way in which your interlocutors will disagree is the problem? If you can foresee someone disagreeing in a particular way and account for it in your argument, and then they disagree anyway, in the exact way you tried to address, that's generally just bad faith.

(though sometimes I do skim posts, by god)

1jimmy1y
Introducing "arguments" and "bad faith" can complicate and confuse things, and neither are necessary. As a simple model, say we're predicting whether the next ball drawn from an urn is black, and we've each seen our own set of draws. When I learn that your initial prediction is a higher probability than mine, I can infer that you've seen a higher ratio of black than I have, so in order to take that into account I should increase my own probability of black. But how much? Maybe I don't know how many draws you've witnessed. On the next iteration, maybe they say "Oh shoot, you said 30%? In that case I'm going to drop my guess from 95% to 35%". In that case, they're telling you that they expect you've seen many more draws than them. Alternatively, they could say "I guess I'll update from 95% to 94%", telling you the opposite. If you knew in advance which side of your new estimate they were likely to end up on, then you could have taken that into account last time, and updated further/less far accordingly until you can't expect to know what you will learn next time. If you *know* that they're going to stick to 95% and not update based on your guess, then you know they don't view your beliefs as saying much. If *that* doesn't change your mind and make you think "Wow, they must really know the answer then!" and update to 95%, then you don't view their beliefs as saying much either. When you can predict that beliefs won't update towards convergence, you're predicting a mutual lack of respect and a mutual lack of effort to figure out whose lack of respect is misplaced.
1SomeoneYouOnceKnew1y
Are you saying that the interlocutors should instead switch to attempting to resolve their lack of mutual respect?
5jimmy1y
Whether it's worth working to resolve any disagreement over appropriate levels of respect is going to depend on the context, but certainly below a certain threshold, object-level discourse becomes predictably futile. And certainly high levels of respect are *really nice*, and allow for much more efficient communication because people are actually taking each other seriously and engaging with each other's perspective. There are definitely important caveats, but I generally agree with the idea that mutual respect and the ability to sort out disagreements about the appropriate level of respect are worth deliberately cultivating. Certainly if I am in a disagreement that I'd like to actually resolve and I'm not being taken as seriously as I think I ought to be, I'm going to seek to understand why, and see if I can't pass their "ideological Turing test" on the matter.

Just noting that this entire post is an overt strawman; its title and central thesis rest on the exactly backward implication that both Rob's and my posts were based on ungrounded theory when they were both built entirely out of studying and attempting to model what actually works in practice, i.e. what are the observable behaviors of people who actually-in-practice consistently and reliably produce both a) clear thinking and b) effective communication of that clear thinking, in a way that is relatively domain-agnostic. In the analogy of "physicists" vs. "... (read more)

[the post’s] title and central thesis rest on the exactly backward implication that both Rob’s and my posts were based on ungrounded theory

Er, where does the OP say this…? I see no such implication. (Indeed, if anything, the OP seems to be saying that the posts in question are based on, so to speak, un-theory’d ground…)

Rob’s and my posts … were both built entirely out of studying and attempting to model what actually works in practice, i.e. what are the observable behaviors of people who actually-in-practice consistently and reliably produce both a) clear thinking and b) effective communication of that clear thinking, in a way that is relatively domain-agnostic.

Well… sure, you can say that. But then… anyone could say that, right? I could write a post that recommended the opposite of any given thing you recommend (e.g., “cultivating an adversarial attitude is good, while cultivating a cooperative attitude leads to worse outcomes”), and I could also claim that this recommendation was “built entirely out of studying and attempting to model what actually works in practice”. And then what would we have? Two competing claims, both backed up by exactly the same thing (i.e., nothing... (read more)

4Yoav Ravid1y
There's a question of whether there really is disagreement. If there isn't, then we can both trust that Duncan and Rob really based their guidelines on their experience (which we might also especially appreciate), and notice that they fit our own experience. If there's disagreement, then it's indeed time to go beyond saying "it's grounded in experience" and exchange further information.
4Said Achmiz1y
Well, I certainly disagree! So, yes, there is disagreement.
2Yoav Ravid1y
Ok then. I'm glad the last two paragraphs weren't just hypothetical for the sake of devil's advocacy.
3[DEACTIVATED] Duncan Sabien1y
Literally the title (and then the first few paragraphs).
0Said Achmiz1y
I don’t understand. How does the title imply this? How do the first few paragraphs? I suppose you could read this implication into the title and introduction, if you were so inclined…? I didn’t, however. I don’t think your claim that “[the post’s] title and central thesis rest on” this implication is well-grounded in what the OP actually says.
1Zack_M_Davis1y
Thanks for commenting! I don't think the typical reader would interpret the title and opening paragraphs as claiming that you and Rob haven't tried to study and model what works in practice? My intent was to play off an analogy Yudkowsky made between "rational-ists" (those who study rationality) and "physics-ists" (those who study physics). I'm saying that I don't want the study of rationality itself as a subject matter to be conflated with any particular set of discourse norms, because I think different discourse norms have different use-cases, much like how different motor designs or martial arts have different use-cases. That's totally compatible with you and Rob having put a lot of work into studying and modeling what actually works in practice! The martial arts analogy seems apt: if I point out that different styles of martial arts exist, I'm not saying that some particular karate master (whose post happened to inspire mine) hasn't tried to study what works. I'm saying that ju-jitsu, boxing, tae kwon do, &c. also exist. The subject matter of "fighting" is bigger than what any one karate master knows. (We might have a substantive disagreement about this, if you don't think a competing school of "rationalists" could have grounds to contest your guidelines?) If you think it would help readers not be misled by my mistakes, feel free to point out the other flaws, too! Writing a comment under my post doesn't put you under any obligation to engage with me.

Hmm, when there is a disagreement somewhere, it is worth going back to first principles, isn't it?

If I remember correctly, Eliezer's motivation for starting the whole series of posts back on Overcoming Bias was "raising the sanity waterline" or something like that. Basically, realizing that you are an imperfect reasoner and striving to see your reasoning flaws and do better. This is an uphill battle: humans did not evolve to reason well at all, and different people have different classes of flaws, some are too combative, some are too accepting, the list i... (read more)

I've personally gotten the most out of people displaying epistemic technique in investigating their own problems so that I have existence proofs for all the myriad spot checks it's possible to run on one's own reasoning.

If you're not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?


Because there's ambiguity, and there are self-fulfilling prophecies. When there's potential for self-fulfilling prophecies, there's a free variable that isn't a purely epistemic question, e.g. "Are we on the same side?" Giving any answer to that question is in some cases implicitly deciding to add your weight to the existence of a conflict.

You role-play to add some driving force to the system--driving t... (read more)

Then it would appear that we're in a conflict over a shared resource: I want to post "Zack-type" things on Less Wrong—including long-form criticism of other posts on Less Wrong—and (assuming I'm reading your comment correctly; feel free to correct me if not) it seems like you want me to not do that.

It looks like we can't both get what we want at the same time. That's a very unfortunate situation for us to be in. If you have any suggestions for Pareto improvements, I'm listening. I'm not sure what else I can say.

A distant relative of mine (I assume, the name is rare), Dr Harold Aspden, by all accounts a well-respected and successful engineer, spent the latter part of his life advocating an 'over-unity motor'. 

There are quite a lot of people who think that you can use a system of mirrors to concentrate sunlight in order to achieve temperatures higher than the surface of the sun. I myself am not sufficiently confident that this is impossible to actually be seriously surprised if someone works out a way to do it. 

I think 'non-physicist motors' are a thing. 


if there is any way of fixing this mess, it's going to involve clarifying conflicts rather than obfuscating them

This immediately brought to mind John Nerst's erisology. I've been paying attention to it for a while, but I don't see it much here (speaking as a decade-long lurker); I wonder why.

1Frederic Janssens1y
Thanks for the pointer. John Nerst's approach is similar to mine. The way I would formulate it here: de facto, people have different priors. If there is a debate/discussion, the most fruitful result would come from constructing, in common if possible, a more encompassing reference frame, where both sets of priors can be expressed to their respective satisfaction. It is not easy. Some priors will be incompatible as such. A real dialogue supposes a readiness to examine one's priors and eventually adjust them to be less restrictive. A static defense of one's priors is mostly a waste of time (or a show). Caveat: bad faith exists; people, and groups, have vulnerabilities they will protect. So a real dialogue is not always possible, or only very partially. The idea is to at least try.

How, specifically, are rough-and-tumble spaces less "rational", more prone to getting the wrong answer, such that a list of "Elements of Rationalist Discourse" has the authority to designate them as non-default?

You may be right that this one sticks out and hasn't been abstracted properly. But I do think there are truth-tracking reasons for this that are pretty general. (I think whether these reasons actually hold water is pretty dubious; rough-and-tumble spaces would very plausibly be significantly more truth-tracking than current rationalist norms; I'm ju... (read more)

doesn't stick after I'm not talking to them anymore

"Aim for long-run mental engineering / truth-tracking information processing, not short term appearance of rule-following", or some better version, seems like an important element of truth-tracking discourse. 

This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there... which, man, if you want to promote groupthink and all kinds of ingroup hidden rules and outgroup forbidden ideas, that's how you'd do it.

You can see it at work: when a post is upvoted, is it because it's well-written/useful, or because it's saying the groupthink? When a post is downvoted, is it because it contains forbidden ideas?

When you talk about making a new faction - that is what this place is. And naming it Rationalists says something ver... (read more)

1SomeoneYouOnceKnew1y
Do you believe encouraging the site maintainers to implement degamification techniques on the site would help with your criticisms?