The current version of the How to Run a Successful Less Wrong Meetup booklet contains descriptions of various games and activities. The problem is that some of these descriptions are quite short and don't really inspire people to try them out. I've been asked to make them sound like more fun, but for some reason I have difficulty doing so. At first I thought it was just because I hadn't tried most of the exercises myself, and it felt dishonest to try to make something sound fun if I didn't know to what extent it actually was fun. But then I realized that I also couldn't come up with anything good for Zendo, a game that I've played and liked. So I'm kinda stumped as to what the reason is.

But if you can't solve a problem, outsource it! I'm posting some excerpts from the most boring-sounding activities in the comments below, and I'd like people to reply to those comments and come up with exciting-sounding descriptions for them. Something in the style of the Biased Pandemic writeup (which I quoted liberally in the booklet) might be ideal, but other styles are cool, too.

Thanks in advance!

Rationalization Game

What does it feel like when you’re rationalizing? In the rationalization game, each player is given an opinion that they consider false, and told to come up with clever reasons for why it's actually true. The more clever and persuasive the arguments, the better the score.

Cause and Belief

In this game, people mention various beliefs of theirs. The beliefs don’t need to be anything controversial: ordinary, everyday beliefs work fine. So anything from “I believe global warming is happening” to “I believe I have an intuition about people generally being good” to “I believe my neighbor’s car is red”. Each person then tries to think of the various reasons why they’ve come to hold this belief. Note that in the example, “I believe I have an intuition about people generally being good” is a distinct belief from “I believe in people generally being good”. You can either think about why you believe you have such an intuition, or, presuming that you do have that intuition, why you have it.

Feel free to go as many levels deep as you feel is necessary, either for explaining your belief or for keeping the discussion interesting. If you state that you believe in global warming because an expert said so, it’s probably worth also mentioning why you believe in this expert in particular, since it’s a contentious subject. On the other hand, if you state that you believe your neighbor’s car to be red because you’ve seen it and it was red, you may want to elaborate on e.g. how you know it was your neighbor’s car, simply because your answer would be rather boring otherwise.

Try not to challenge each other’s beliefs. The goal is not to have a debate, but to engage in an open-minded deconstruction of the reasons for why you think what you think. By making the game into an exercise where everyone is free to name even the silliest causes for their beliefs, people become more likely to actually adjust beliefs that they notice to be on shaky ground. If somebody asks other people for an opinion on whether their reasoning makes sense, feel free to answer, but do try to keep it non-confrontational.

Fermi Problems

The classic Fermi problem, generally attributed to Fermi, is "How many piano tuners are there in Chicago?" A typical solution to this problem would involve multiplying together a series of estimates that would yield the correct answer if the estimates were correct. For example, we might make the following assumptions:

  • There are approximately 5,000,000 people living in Chicago.
  • On average, there are two persons in each household in Chicago.
  • Roughly one household in twenty has a piano that is tuned regularly.
  • Pianos that are tuned regularly are tuned on average about once per year.
  • It takes a piano tuner about two hours to tune a piano, including travel time.
  • Each piano tuner works eight hours in a day, five days in a week, and 50 weeks in a year.

From these assumptions we can compute that the number of piano tunings in a single year in Chicago is

(5,000,000 persons in Chicago) / (2 persons/household) × (1 piano/20 households) × (1 piano tuning per piano per year) = 125,000 piano tunings per year in Chicago.

We can similarly calculate that the average piano tuner performs

(50 weeks/year) × (5 days/week) × (8 hours/day) / (2 hours to tune a piano) = 1000 piano tunings per year per piano tuner.

Dividing gives

(125,000 piano tunings per year in Chicago) / (1000 piano tunings per year per piano tuner) = 125 piano tuners in Chicago.

-- Wikipedia on Fermi Problems

Fermi calculations are a quick way to come up with rough numerical estimates of various things, and some companies use Fermi problems as interview questions. Coming up with Fermi problems, breaking them apart into their component assumptions, and then checking whether the resulting estimates were roughly correct can be an entertaining activity for a while. The skill of doing Fermi calculations can come in useful, as can knowing how accurate your Fermi estimates tend to be.
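If the group wants to check an estimate quickly, the whole chain of assumptions can be written down as a tiny script. Below is a minimal Haskell sketch of the piano-tuner calculation quoted above; the numbers are just the rough guesses from the Wikipedia example, and the variable names are my own.

```haskell
-- A rough sketch of the piano-tuner estimate, with each assumption as a
-- named figure so it is easy to tweak. All numbers are the guesses from
-- the quoted example, not real data.
main :: IO ()
main = do
  let population      = 5000000 :: Double   -- people living in Chicago
      personsPerHouse = 2                   -- persons per household
      pianoFraction   = 1 / 20              -- households with a regularly tuned piano
      tuningsPerPiano = 1                   -- tunings per piano per year
      hoursPerTuning  = 2                   -- hours per tuning, travel included
      workHours       = 8 * 5 * 50          -- working hours per tuner per year
      tuningsNeeded   = population / personsPerHouse * pianoFraction * tuningsPerPiano
      tunersNeeded    = tuningsNeeded / (workHours / hoursPerTuning)
  putStrLn ("Estimated piano tuners in Chicago: " ++ show (round tunersNeeded :: Integer))
```

Rerunning it after changing a single assumption also makes it easy to see how sensitive the final figure is to each individual guess.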

Precommit to Updates

Have people state their opinions on various questions, and then ask them how they would update if they encountered certain kinds of evidence. For example, a person who thought that European swallows had a higher airspeed velocity than African swallows might precommit to updating their belief to “undecided” if they were presented with evidence suggesting that African birds were generally faster than European ones. The group would then go looking for such evidence.

Zendo

Zendo is a game where one player picks a rule and creates structures that follow that rule, and the other players try to discover the rule by building their own structures and asking whether those structures follow the rule. It can be used to practice induction and to learn to avoid confirmation bias. See Wikipedia for the exact rules.

maia:

Zendo, also known as 'Science: the game,' ...

Excellent suggestion, thank you! And with a bit of looking, I actually found an essay by someone who uses Zendo to teach the scientific method. I incorporated your suggestion, as well as a brief excerpt from his essay:

Zendo, also known as ‘Science: the game’, involves one player picking a rule and creating structures that follow that rule. The other players try to discover the rule by building their own structures and asking whether those structures follow the rule. See Wikipedia for the exact rules.

Traditionally, the names of these roles are ‘Master’ and ‘Students’, but one may also call them ‘Nature’ and ‘Scientists’, as the players can be thought of as researchers conducting experiments in an attempt to uncover the hidden laws of nature. One author, Nick Bentley, writes that he uses Zendo for teaching the scientific method, and names four issues which come up both in real science and in the game (the whole essay is recommended reading):

Here's the great thing: issues that pop up in real science also emerge in the game. Here are four:

  1. Ambiguous Hypotheses - Sometimes, a Scientist will state an unclear hypothesis. In this case, the universe must ask for clarification to construct a counterexample. This is one of the central problems of real science too: how to construct testable hypotheses? Zendo's a forum in which to practice the kind of precise language needed to do so. Awesome.
  2. Superstitions based on spurious correlations - Sometimes, thanks to the Scientists' experimental choices, a pattern of white and black stones builds up on the table which all conform to an incorrect hypothesis about the law of nature. This is how real Scientists get stuck too. And just like in real science, you get unstuck by finding an experimental counterexample to the incorrect hypothesis, at which point the Scientists undergo a "Paradigm Shift". Paradigm Shifts also happen when new investigators without the usual biases (who can interpret experimental results in a new way) enter the field. For this reason it's said that science proceeds by retirements (the older biased Scientists retire and make way for new and differently-biased ones). In Zendo, the same thing happens when someone who's not even playing walks by the table, glances at the experiments, and points out a hypothesis that the players missed due to group-think. It makes clear the value of fresh perspective and independent thinking.
  3. The value of simple, systematic experimentation - In Zendo, it helps if Scientists do experiments in series, where each experiment differs only slightly from the last. This allows Scientists to quickly pinpoint the variables that matter to the experimental outcome. Scientists also learn to minimize the number of variables in each experiment, to minimize the chance for spurious correlations as described in point 2 above. These are essential practices for real Scientists.
  4. The value of Occam's Razor - Scientists quickly learn how to make their hypotheses as simple as possible, because then it's easy to interpret the counterexamples that disprove them. The more parts a hypothesis has, the harder it is to infer from a counterexample what part is wrong.

These are the fundamentals of the scientific method, and Zendo presents them as no real-life lab exercise ever could, because it presents them free of the distracting technical details of real-life experiments. There's no faster or clearer way to learn them.

Thus, becoming better at Zendo involves becoming better at the basic skills that are used for forming hypotheses about reality. In particular, it teaches one to be wary of confirmation bias, as players quickly realize that even hypotheses which fit all the existing data can easily be wrong. It is also useful in becoming more aware of the illusion of transparency, as it is common for the Nature player to develop a rule which seems easy and obvious to them, but which turns out to be very hard for the players to guess.

One early surprising result of Zendo is that what you think is an "easy" and "obvious" rule is probably the illusion of transparency in action.

I think that might be due to the free-form nature of the rules. In #lesswrong, we sometimes have lambdabot in chat, which can evaluate (pure) Haskell functions and also accepts private definitions of functions. So we can and do play Zendo with Haskell functions on integer triplets. Sometimes the functions are really difficult to guess, but no one seems to regard them as 'unfair'.
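For readers who haven't seen this variant, here is a hypothetical sketch of what such a secret rule could look like; the predicate and the sample queries are made up for illustration, not taken from an actual #lesswrong game.

```haskell
-- A made-up example of a secret Zendo rule on integer triplets:
-- a pure predicate that the rule-picker defines privately, and that
-- the other players probe by proposing triplets of their own.
rule :: (Integer, Integer, Integer) -> Bool
rule (a, b, c) = a + b > c && odd (a * c)

-- Sample experiments a player might run:
--   rule (3, 2, 1)  ==> True
--   rule (2, 3, 4)  ==> False
```

Since (per the comment above) the definition can be given to the bot privately, players only ever see the True/False verdicts for the triplets they propose, which reproduces the experiment-and-observe loop of the board-game version.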

[anonymous]:

Aumann’s Thunderdome

Two people are chosen to debate a specific subject. As they do so, nearby people go online to do fact-checking on the debaters’ claims. If the debaters make mistaken claims, they are informed of this. See if either of the debaters (or one of the audience members) ends up changing their mind.

[This comment is no longer endorsed by its author]

Rejection Therapy

Rejection Therapy is a game where the participants try to get rejected by a person or a group. Players can go for any kind of social rejection, or buy a deck of cards with suggestions for ways to get rejected. For example, you can ask for a discount when buying something at a store, ask someone out on a date, or ask a total stranger to be your friend on Facebook. Wikipedia summarizes some of the rules as:

There is only one official rule to Rejection Therapy, which is to be rejected by another person at least once, every day. There are also stipulations as to what counts as a rejection and what does not:

  1. A rejection counts if you are out of your comfort zone
  2. A rejection counts if your request is denied
  3. At the time of rejection, the player, not the respondent, should be in a position of vulnerability. The player should be sensitive to the feelings of the person being asked.

Over time, players get used to the idea of asking people things, and begin to no longer fear rejection. The game has two official winning conditions: a player getting rejected for 30 consecutive days, and the fear of rejection no longer inhibiting the player.

Behavioral Analysis

Knowledge of cognitive biases and the science of decision-making is supposed to improve one’s life. But there’s a risk that a person who spends a lot of time thinking or talking about such things will believe they are having a big impact on their life when in reality they aren’t. One way to combat this is to explicitly recall times when one has actually applied that knowledge, as well as times when one hasn’t, and to consider both the good and the bad outcomes. As an exercise, let everyone try to fill in the following table with experiences from their own life, and then discuss the results.

  • Behaved irrationally, bad results: e.g. “I really wanted something to happen so I believed it would, with these bad consequences”
  • Behaved irrationally, good results: e.g. “Wishful thinking motivated me and made me succeed.”
  • Behaved rationally, bad results: e.g. “I started wondering whether I had enough evidence to believe what I did, and became indecisive and lost my chance to act.”
  • Behaved rationally, good results: e.g. “I noticed that I didn’t have the necessary evidence to believe what I did, so I changed my mind and avoided a lot of trouble.”