Motivation

There is already a fair amount of interest in judgemental forecasting within the Effective Altruism community. We think there’s a whole lot of good research left to be done.

Valuable research opportunities are spread across many areas. We could use people to speculate on research directions, outline incentive mechanisms, try novel forecasting questions with friends, and outline new questions that deserve forecasts. Some of this requires a fair amount of background knowledge, but a lot doesn’t.

The EA and LW communities have a history of using prizes to encourage work in exciting areas. We’re going to try one in forecasting research. If this goes well, we’d like to continue and expand the program.

Prize

This prize will total $1,000, divided among multiple recipients, with a minimum first-place prize of $500. We will aim for 2–5 recipients in total. The prize will be paid for by the Quantified Uncertainty Research Institute (QURI).

Rules

To enter, first make a public post online between now and Jan 1, 2021. We encourage you to post directly to, or make a link post on, either LessWrong or the EA Forum. Second, complete this form, also before Jan 1, 2021.

Research Feedback

If you’d like feedback or would care to discuss possible research projects, please do reach out! To do so, fill out this form. We’re happy to advise at any stage of the process.

Judges

The judges will be AlexRJL, Nuño Sempere, Eric Neyman, Tamay Besiroglu, Linch Zhang and Ozzie Gooen. The details of the judging process will vary depending on how many submissions we get. We’ll select winners based on importance, novelty, and presentation.

Some Possible Research Areas

Areas of work we would be excited to see explored:

  • Operationalizing questions in important domains so that they can be predicted on, e.g., Metaculus. This is currently a significant bottleneck; it’s surprisingly difficult to write good questions. Past examples include the Ragnarök and Animal Welfare series. A possible suggestion might be to try to come up with forecastable fire alarms for AGI. Tamay Besiroglu has suggested an “S&P 500 but for AI forecasts,” i.e., a group of forecasting questions which track something useful for AI (or for other domains).
  • Small experiments where you and/or a group of people use forecasting for your own decision making, and write up what you’ve learned. For example, set up a Foretold community to decide on which research document you want to write up next. Predictions as a Substitute for Reviews is an example here.
  • New forecasting approaches, or forecasting tools being used in new and interesting ways, or applied to new domains. For example, Amplifying generalist research via forecasting, or Ought’s AI timelines forecasting thread.
  • Estimable or gears-level models of the world that are well positioned to be used in forecasting. For example, a decomposition informed by one’s own expertise of a difficult question into smaller questions, each of which can be then forecasted. Recent work by CSET-foretell would be an example of this.
  • Suggestions for, or basic implementations of, better tooling for forecasters, like a Bayes rule calculator for combining many pieces of evidence, a Laplace’s rule of succession calculator, etc.
  • New theoretical schemes which propose solutions to current problems around forecasting. For a recent example, see Time Travel Markets for Intellectual Accounting.
  • Eliciting forecasts on useful questions from expert forecasters. For example, the probabilities of the x-risks outlined in The Precipice.
  • Overviews of existing research, or thoughts or reflections on existing prediction tournaments and similar. For example, Zvi’s posts on prediction markets, here and here.
  • Figuring out why some puzzling behavior happens in current prediction markets or forecasting tournaments, as in Limits of Current US Prediction Markets (PredictIt Case Study). For a new puzzle suggested by Eric Neyman, consider that PredictIt is thought to be limited because it caps trades at $850, has various fees, etc., which make it not the sort of market that big, informed players can enter and make efficient. But that fails to explain why markets without such caps, such as FTX, have prices similar to PredictIt’s. So, is PredictIt reasonable, or is FTX unreasonable? If the former, why is there such a strong expert consensus against what PredictIt says so often? If the latter, why is FTX unreasonable?
  • Comments on existing posts can themselves be very valuable. Feel free to submit a list of good comments instead of one single post.
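To illustrate the tooling bullet above, here is a minimal sketch of what such calculators might look like. The function names and interfaces are hypothetical, not any existing tool: one function combines a prior with several independent pieces of evidence via odds and likelihood ratios, and the other applies Laplace’s rule of succession.

```python
from functools import reduce

def bayes_update(prior_odds, likelihood_ratios):
    """Combine a prior (expressed as odds) with several independent
    pieces of evidence, each given as a likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis).
    Returns the posterior probability of the hypothesis."""
    posterior_odds = reduce(lambda odds, lr: odds * lr,
                            likelihood_ratios, prior_odds)
    return posterior_odds / (1 + posterior_odds)  # odds -> probability

def laplace_rule(successes, trials):
    """Laplace's rule of succession: estimated probability that the
    next trial succeeds, given `successes` out of `trials` so far."""
    return (successes + 1) / (trials + 2)

# Even odds prior, two independent pieces of evidence each twice as
# likely under the hypothesis: posterior odds 4:1, i.e. probability 0.8.
print(bayes_update(1.0, [2.0, 2.0]))

# An event that has happened 9 times out of 10 attempts.
print(laplace_rule(9, 10))
```

A real tool would also need to handle the (common) case where pieces of evidence are not independent, which is part of what makes this kind of tooling an open problem rather than an afternoon project.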

Comments

I suggest that you allow submission of posts written before this announcement. Doing so incentivizes the kind of work that people expect might later be rewarded with prizes.

Thanks for the idea. I'm hesitant to do that, for this round at least. One of the main reasons we're doing this is to test the hypothesis that a prize will encourage more writing, and giving much of the prize to previous entries would work against that.

I'm curious though, do you have thoughts on what a proposal would look like? Like, we accept entries from the last month, or last year? 

I would note that if you want feedback on recently written posts, I'd be happy to help there. Reaching out seems fine to me.

I'm curious though, do you have thoughts on what a proposal would look like?

Suggestion: pay for the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved. "Undervalued" meaning the amount the post is worth minus the amount the writer received.

That's a great way to incentivize people to write their ideas on the topic! I doubt I'll participate, since I know next to nothing about forecasting, but I hope you get a lot of great submissions.

I also like the range of allowed submissions, for example

  • Comments on existing posts can themselves be very valuable. Feel free to submit a list of good comments instead of one single post.