Funny timing. I'm actually in the process of working on https://calibration-training.netlify.app/ and am planning to post some sort of initial alpha release to LessWrong soon! I need to seed the database with more questions first, though. Right now there are only 10, but I have a script and an approach that should make it easy to get to tens of thousands soon. This is helpful, though. I'll look through the existing resources and see if there's anything I can use to improve my app.

The Open Philanthropy and 80,000 Hours links are for the same app, just at different URLs.

I made an Android app based on http://acritch.com/credence-game/, which you can find here.

And funny timing for me too: I just hosted a web version of the Aumann Agreement Game at https://aumann.io/ last week (most likely more riddled with bugs than a dumpster mattress) and was holding off on testing it until I had some free time to post about it.

This looks super neat, thank you for sharing. I just did a quick test and can confirm that it is in fact riddled with bugs. If it would help, I can write up a list of what needs fixing.

That would be helpful if you have the time, thanks!

Well, the biggest problem is that it doesn't seem to work. I tested it in a 2-player game where we both locked in an answer, but the game didn't progress to the next round. I waited for the timer to run out, but it still didn't progress, just stayed at 0:00. Changes in my probability are also not visible to the other players until I lock mine in.

A few more minor issues:

  • After locking in a probability, there's no indication in the UI that I've done so. I can even press the "lock" button again and get the same popup, despite the fact that it's already locked. It would be better to have the lock button disappear or grey out, and/or have some other clear visual indicator that it's locked.
  • If two people join with the same username, the game seems to think that they're all the same person. They all show up as "you" on the player list, although they are given different answers.
  • "Wendy's" shows up as "Wendy's", and same problem for all other words containing apostrophes. (Probably because you're setting element.innerText rather than element.innerHTML, or something like that.)
  • If I try to join an invalid room, nothing happens, which is confusing. It would be better to have some sort of error message displayed.
  • It's possible to join a room with a blank username by accident. Fixing that is a pain because of the next issue:
  • If someone in the waiting room quits and rejoins, they'll show up as a third player. It doesn't seem possible to remove a player from the game; you have to cancel and create a whole new game.
  • Pressing the "back" button in the browser has some very unintuitive results. I'd expect it to take me back to the homepage, but it seems to either leave me in the same game or put me back into a previous game.
  • And not a bug, but it would be nice to be able to customize the timer length.
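
For the apostrophe issue, here's a minimal sketch of one possible fix, assuming the server is sending HTML-escaped strings and the client is displaying them via innerText/textContent. The helper name, element id, and example string are just illustrative; I haven't looked at the actual aumann.io code:

```typescript
// Minimal sketch (TypeScript, browser): decode HTML entities such as "&#39;"
// before displaying, while still assigning via textContent (safer than
// switching to innerHTML, which would allow markup injection).

/** Decode HTML entities (e.g. "&#39;" -> "'") without executing any markup. */
function decodeHtmlEntities(escaped: string): string {
  // DOMParser parses the string as HTML, but we only read the text back,
  // so nothing from the input is ever attached to the live document.
  const doc = new DOMParser().parseFromString(escaped, "text/html");
  return doc.body.textContent ?? "";
}

// Hypothetical usage: decode first, then keep using textContent as before.
const questionEl = document.getElementById("question-text");
if (questionEl) {
  questionEl.textContent = decodeHtmlEntities("Wendy&#39;s");
  // -> displays: Wendy's
}
```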

Thanks! I'll look into these. Refactoring the entire frontend codebase is probably worth it, considering I wrote it months ago and it's kinda embarrassing to look back at.

This is fantastic. We used Critch's calibration game and the Metaculus calibration trainer for our Practical Decision-Theory course, but it's always good to have a very wide variety of exercises and questions.

It would be nice if you wrote a short note for each link, e.g. "requires download" or "questions are from 2011", or sorted the list somehow :)

Metaculus has a calibration tutorial too: https://www.metaculus.com/tutorials/

I've been thinking about adding a calibration exercise to https://manifold.markets as well, so I'm curious: what makes one particular set of calibration exercises more valuable than another? Better UI? Interesting questions? Legible or shareable results?

Questions about topics I don't know anything about result in me just putting the max-entropy distribution on them, which is fine if that's rare, but leads to unhelpful results if such questions make up a large proportion of the total. Most calibration tests I found pulled from generic trivia categories such as sports, politics, celebrities, science, and geography. I didn't find many that were domain-specific, so that might be a good area to focus on.

Some of them don't tell me what the right answers are at the end, or even which questions I got wrong, which I found unsatisfying. If there's a question that I marked as 95% and got wrong, I'd like to know what it was so that I can look into that topic further.

It's easiest to get people to answer small numbers of questions (<50), but that leads to a lot of noise in the results. A perfectly calibrated human answering 25 questions at 70% confidence could easily get 80% or 60% of them right and show up as miscalibrated. Incorporating statistical techniques to prevent that would be good. (For example, calculate the standard deviation for that number of questions at that confidence level, and only tell the user that they're over/under confident if they fall outside it.) The fifth one in my list above does something neat where they say "Your chance of being well calibrated, relative to the null hypothesis, is X percent". I'm not sure how that's calculated though.
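
To illustrate, here's a rough sketch of the kind of noise-aware check I have in mind, using a normal approximation to the binomial. The function name and the 2-standard-deviation threshold are my own choices, not how any of the linked apps actually compute their numbers:

```typescript
// Sketch: only flag over/underconfidence when the observed hit rate at a
// given confidence level falls outside ~2 standard deviations of what a
// perfectly calibrated answerer would produce (normal approximation to the
// binomial distribution).

interface CalibrationVerdict {
  expected: number;   // expected number correct if perfectly calibrated
  observed: number;   // actual number correct
  stdDev: number;     // binomial standard deviation: sqrt(n * p * (1 - p))
  verdict: "overconfident" | "underconfident" | "within noise";
}

function checkCalibration(
  confidence: number,   // e.g. 0.7 for the 70% bucket
  nQuestions: number,   // questions answered at this confidence level
  nCorrect: number,     // of those, how many were answered correctly
  zThreshold = 2        // ~95% band under the normal approximation
): CalibrationVerdict {
  const expected = confidence * nQuestions;
  const stdDev = Math.sqrt(nQuestions * confidence * (1 - confidence));
  const z = (nCorrect - expected) / stdDev;

  let verdict: CalibrationVerdict["verdict"] = "within noise";
  if (z < -zThreshold) verdict = "overconfident";   // fewer right than claimed
  if (z > zThreshold) verdict = "underconfident";   // more right than claimed
  return { expected, observed: nCorrect, stdDev, verdict };
}

// The example above: 25 questions at 70% confidence. Expected 17.5 correct,
// SD ~2.29, so 15 correct (60%) or 20 correct (80%) are both well within
// 2 SD and should not be reported as miscalibration.
console.log(checkCalibration(0.7, 25, 15)); // verdict: "within noise"
console.log(checkCalibration(0.7, 25, 20)); // verdict: "within noise"
```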

Nice! Added these to the wiki on calibration: https://www.lesswrong.com/tag/calibration