
The conjecture workshop is an activity I’ve run three times now in a conference/meetup-style setting. Feedback has been unusually positive, so I’m writing it up here to codify the basic idea and hopefully get independent feedback from others who try it.

The overall goal is to translate intuitive ideas into mathematical conjectures.

Format

People break into groups of 2-3. Within each group, one person serves as “conjecturer”, and the other one or two serve as “questioners”. Roles can rotate over the course of the allotted time (~1 hr for sessions so far).

To start, the conjecturer should have some intuitive claim in mind that they want to formalize. Some AI-oriented examples from previous sessions:

  • In beneficial comprehensive AI services, security services and planning services necessarily have to be "agenty"
  • There exists some “universal” algorithm which performs about-as-well as any other on optimization problems with bounded runtime
  • Coarse-grained models of the world form a category, and we can always construct pullback/pushout models within that category

Note that these blurbs are quite fuzzy and leave out most of each idea; fully explaining any of them intuitively (even without formalization) takes much longer than a short blurb. Indeed, the ideas usually start out even fuzzier than this - just summarizing them in a sentence is hard.
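To illustrate the target end-state, here is one way the second blurb above could crystallize by the end of a session. This is purely a hypothetical sketch - the symbols, types, and the particular slowdown factor are all invented for illustration, not the output of an actual workshop:

```latex
% Hypothetical formalization of the "universal algorithm" blurb.
% All symbols and the specific bound are illustrative placeholders.

\textbf{Definitions.}
Let $\mathcal{P}$ be a class of optimization problems. For each $p \in \mathcal{P}$,
let $f_p : X_p \to \mathbb{R}$ be the objective to maximize.
An algorithm $A$ run with runtime budget $T$ returns a candidate $A(p, T) \in X_p$.

\textbf{Conjecture (draft).}
There exist an algorithm $U$ and a constant $c$ such that, for every algorithm $A$,
every problem $p \in \mathcal{P}$, and every budget $T$,
\[
  f_p\bigl(U(p,\ c \, T \log T)\bigr) \;\geq\; f_p\bigl(A(p,\ T)\bigr).
\]
```

Notice how formalization forces choices the blurb left open: what "about-as-well" means (here, a guess: equal objective value at a polylog slowdown), and exactly which objects the claim quantifies over.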

Once the conjecturer has something in mind, they try to explain the claim to the questioners, intuitively. The questioners’ job is to regularly interrupt, and ask questions to help formalize what the conjecturer is saying. Common examples include:

  • “You’ve mentioned <thing> a couple of times. Should we give it a name?”
  • “What type of thing is <thing we just named>? Any constraints on it?”
  • “Ok, so we claim/assume that <thing> is <better/larger/simpler/etc> than <other thing> in some intuitive sense. How do we quantify that?”
  • “So we want to assume <intuitive assumption>. What does that mean, mathematically?”
  • “Does the model so far accurately capture your intuitions, at least the parts which intuitively seem relevant to the claim?”

The questioners should also provide general explanation support: ask questions like "can you give an example?" and "can you repeat/clarify that last part?", and repeat back their understanding of the claim so far. In particular, the questioners should remind the conjecturer to write down the components of the mathematics (i.e. variable/function definitions, assumptions, the claim itself, etc.) as they come up.
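As a hedged illustration of what that running written record might look like mid-session (using the third blurb above; every name and assumption here is hypothetical, not from a real session):

```latex
% Hypothetical mid-session notes for the coarse-grained-models blurb.
% All definitions and assumptions are illustrative.

\textbf{Named things:} $W$ = the underlying world; $M_1, M_2$ = coarse-grained models.

\textbf{Types/constraints:} all of these are objects of some category $\mathcal{C}$
(still to be pinned down); a coarse-graining is a morphism $\pi_i : W \to M_i$.

\textbf{Assumption (draft):} ``any two coarse-grainings of the same world can be
merged'' means: the pushout $M_1 \sqcup_W M_2$ exists in $\mathcal{C}$.

\textbf{Claim (draft):} $\mathcal{C}$ has all pushouts of spans
$M_1 \leftarrow W \rightarrow M_2$ in which both legs are coarse-grainings.
```

The point of the written record is that each line is an artifact the questioners can point back to ("what type is $\pi_1$ again?") rather than re-deriving from conversation.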

Key point: PROVING OR DISPROVING THE CLAIM IS NOT THE GOAL. For purposes of this activity, we do not care whether or not the claim is true; the goal is simply to formalize it enough that it could be mathematically proven/disproven. (One minor exception: if the claim seems trivially true/false at some point, that’s often evidence that some key piece of the conjecturer’s intuition has not been captured.)

Value Model

The idea behind the exercise is that translation is a scarce resource - in this case, translation of intuitions into mathematical formalism. Often, a major bottleneck to theoretical/modelling work is simply expressing the ideas mathematically.

Focusing on a particular conjecture also helps avoid a “model ALL the things” failure mode. Since there’s one particular claim we want to set up, we just work out the pieces necessary for that claim, not for a whole theory of everything.


Comments

Last summer, when I was at the EA Hotel for TAISU, I got the most value out of doing something similar. I'd host a session to "workshop" an idea I had: roughly 20 minutes of setting it up and 40 minutes of back-and-forth, with people pointing things out, stating objections, asking for clarifications, etc. It was less structured than your approach, and I quite like this idea because it creates a level of safety my approach did not: it effectively bans (at least for the course of the conjecture workshop) the kinds of criticism that people sometimes jump to and that shut down fruitful idea development.

Thanks for sharing!