
(Lawrence's) Reflections on Research

Nov 14, 2022 by Lawrence Chan

This is less of a sequence and more of a loose collection of thoughts on how to orient oneself toward doing better research and toward the Bay Area AI safety community in general. A lot of the advice here is aimed at past Lawrence, especially:

  • Touch reality as soon as possible (when doing machine learning research)
  • Evaluations (of new AI Safety researchers) can be noisy

In general, most of these posts were written in ~3 hours each and contain advice to which the law of equal and opposite advice applies; consequently, I wouldn't take the advice too seriously. In particular, please don't injure yourself by following any advice here. If you feel like you're suffering due to a piece of advice here, please disregard it and do something else instead.

That being said, I hope you find these thoughts useful nonetheless. 

Posts in this sequence:

  • Touch reality as soon as possible (when doing machine learning research)
  • Evaluations (of new AI Safety researchers) can be noisy
  • Behavioral and mechanistic definitions (often confuse AI alignment discussions)