TL;DR: I think new machine learning researchers often make one of two kinds of mistakes: not making enough contact with reality, and being too reluctant to form gears-level models of ML phenomena. Stereotypically, LW/AF researchers tend to make the former mistake, while academic and industry researchers tend to make the latter kind. In this post, I discuss what I mean by “touching reality” and why it’s important, speculate a bit on why people don’t do this, and then give concrete suggestions. 

Related to: Making Beliefs Pay Rent, The Feeling of Idea Scarcity, Micro-Feedback Loops and Learning, The Three Stages of Rigor, Research as a Stochastic Decision Process, and Chapter 22 of HPMOR.[1] 

Epistemic status: Written quickly in ~3 hours as opposed to carefully, but I'm pretty sure it's directionally correct. [2]

Acknowledgments: Thanks to Adrià Garriga-Alonso for feedback on a draft of this post and Justis Mills for copyediting help.


Introduction: two common mistakes in ML research

Broadly speaking, I think new researchers in machine learning tend to make two kinds of mistakes:

  • Not making contact with reality. This is the failure mode where a new researcher reads a few papers that their friends are excited about, forms an ambitious hypothesis about how to solve a big problem in machine learning, and then spends months drafting a detailed plan. Unfortunately, after months of effort, our new researcher realizes that the components they were planning to use do not work nearly as well as expected, and as a result they’ve wasted months of effort on a project that wasn’t going to succeed.
  • Not being willing to make gears-level models. This is the failure mode where a new researcher decides to be agnostic about why anything happens, and believes empirical results and only empirical results, even when said results don’t “make sense” on reflection. The issue here is that such researchers tend to get stuck implementing an inefficient variant of grad student descent, able to make only small amounts of incremental progress via approximately blind search, and end up doing whatever is popular at the moment. 

That’s not to say that these mistakes are mutually exclusive: embarrassingly, I think I’ve managed to fail in both ways in the past. 

That being said, this post is about the first failure mode, which I think is far more common in our community than the second. (Though I might write about the second if there's enough interest!)

Here, by “touching reality”, I mean checking that your beliefs are right, either by writing code and running empirical ML experiments, or (less commonly) by grounding your ideas in a detailed formalism (to the level where you can write proofs of new, non-trivial theorems about said ideas).[3] I don’t think writing code or inventing a formalism qualifies by itself (though both are helpful); touching reality requires receiving actual, concrete feedback on your ideas.

Why touch reality?

I think there are four main reasons why you should do this:

Your ideas may be bad

When you’re new to a field, you probably don’t yet understand all of its key results and concepts. As a result, the ideas you come up with are very likely to be bad. This is especially true for fields like machine learning that carry significant amounts of tacit knowledge. By testing your ideas against reality, you get feedback on where your model of the field is deficient, and can thereby develop better ideas. Touching reality as soon as possible shortens your feedback cycles and lets you more quickly develop an understanding of the field’s important ideas. 

Other people's ideas may be bad or misleading

Many machine learning papers published in conferences (let alone arXiv preprints) have misleading abstracts, where the results don’t support some of the headline claims. Sometimes this happens because of white lies or omissions on the authors’ part. More benignly, it often happens because the authors’ results don’t generalize as far as they thought they would. Machine learning is especially susceptible to this issue, as many ML results are finicky and depend on particular quirks of the authors’ setup. Before spending months of your life building on some ideas, it’s prudent to make sure that the ideas are actually good. 

Your tools may not work the way you think they do

Relatedly, algorithms presented in papers without misleading claims can still fail because those papers don’t write down all of their key assumptions or code-level optimizations. I think this rarely stems from deliberate deception by paper authors; instead, it almost entirely comes from the fact that it can be challenging to get machine learning algorithms to work reliably. Even when an algorithm generally works as expected on domains similar to those in the paper, errors in understanding tend to accumulate when you put many unfamiliar algorithms together. As a result, it’s almost always worth reproducing each of the algorithms independently, and testing that they work as expected as soon as possible. 

It helps you explain your ideas to other people

When trying to get feedback for any idea, it’s often the case that the person giving you feedback won’t fully understand it. Even worse, you might have a double illusion of transparency: both you and the other person falsely believe the communication was successful. This often happens in machine learning because of a relative lack of standard terminology in many new subfields (and especially amongst novices, who might not know the standard terminology that does exist). As a result, said feedback can be worse than useless, leading to more wasted effort. Concrete examples both help you explain your ideas more clearly, and also help you and others notice when miscommunication has occurred. As a result, I think it’s good practice to include at least a toy example (if not a preliminary result) when communicating with people you aren’t regularly collaborating with. 

Why don't people touch reality?

I don’t think that "contact reality as soon as possible" is particularly novel advice – for example, I think much of academic machine learning has absorbed this ethos (perhaps a bit too much, even), and there are many similar ideas floating around on LessWrong/Alignment Forum. However, it’s still often the case that new researchers fail to contact reality for long periods of time. Here are my speculations as to why this happens, which I’ll ground in my own experiences (though I have also seen them in others'):

Idea scarcity

As John Wentworth says in The Feeling of Idea Scarcity, many new researchers feel that ideas are much more scarce than they actually are, and stick to failing ideas for too long. This makes it tempting to continue polishing the first idea you have, as opposed to testing a half-baked idea. 

In my case, back in late 2016 and early 2017, I spent a month of my life working on tree-structured RNNs with attention mechanisms, since clearly natural language should be tree-shaped (and I didn’t have any other ML ideas)! However, I got bogged down in implementation details thanks to TensorFlow 0.x, and spent most of my time cleaning those up as opposed to running new experiments. It turns out that no, tree-structured RNNs are not the correct way to model language.[4] I think I would’ve noticed this a lot sooner if I’d spent some time constructing small toy tasks where I thought tree-structured RNNs would be better, and then training small models on those, even though I hadn’t worked out all the fiddly implementation details. And I would’ve been a lot more willing to take the troubles I had with implementation as evidence against the idea if I hadn’t felt like it was the only ML idea I would ever have. 

Similarly, the (false) feeling of idea scarcity often causes new people to work too much on their one idea, instead of testing their half-baked ideas on reality. 

Deference to authority

I think a lot of new researchers come in with a strong belief that academic papers (especially from prestigious authors) are authoritative sources, and therefore that the claims made in them are definitely correct and generalizable. I also think that many new researchers are (correctly) skeptical of their ability to generate true claims that contradict published results, and so tend to take published results on faith. 

One of the first projects I was involved in at CHAI used Bayesian neural networks to do active value learning. It seemed to me like a pretty straightforward idea: we’d implement some Bayesian neural networks, do some variational inference to update them, and then use the resulting posterior in algorithms that used value of information to select queries. At the time, I (along with many people at CHAI) was very bullish on Bayesian neural networks, given the recent slate of papers around that time (2015-2017) from impressive-seeming professors showing impressive-seeming results. Unfortunately, it turned out that Bayesian neural networks were significantly trickier to get working in practice on the value learning tasks we were working with, and nothing came of the project despite several months of effort. A few months later, a research engineer at CHAI found that many Bayesian neural network algorithms (including the one we were using for our project) often failed to approximate some toy 4-d distributions—if I had been less trusting of authoritative papers and more willing to try some toy problems, I think I would’ve saved myself a lot of effort. 

Note that I’m not saying that new researchers should throw away all of conventional wisdom. Instead, I think that new researchers should be more willing to quickly verify claims made by authoritative figures. 

Aversion to Schlepping

Finally, I think the biggest reason new ML researchers avoid contacting reality is that doing machine learning experiments or coming up with formalism to write non-trivial theorems involves a lot of tedious, unglamorous tasks—that is, it can involve a lot of schlepping. For example, data munging can be incredibly tedious, even for relatively simple NLP datasets. In contrast, thinking about new ideas and discussing them with collaborators is fun and often significantly easier. It also doesn’t help that many sources present a skewed picture of research that focuses too much on the new ideas and too little on the day-to-day work. 

In my case, I’ve put off writing code for simple experiments many, many times. In a different active value learning project, I put off doing experiments (and indeed, basically the whole project) for a full month and a half due to a strong ugh field around dealing with the fiddly bits. Probably the worst case of this for me was not wanting to do some simple human subject studies for a paper, despite said paper being rejected from a conference explicitly because it lacked a human study. I ended up just dropping the project.[5] That being said, I think I’ve become significantly better along this axis, as I’ve done more schlep work for more projects and realized that I was overestimating the pain and tedium involved. 

Of course, it’s definitely possible to go too far, and end up only doing low value, schleppy work. And obviously, I think you should always try to avoid unnecessary suffering. But as a whole, I think new researchers tend to overestimate the pain involved in schleppy work and underestimate how said work gets less tedious over time, and could benefit from some amount of pushing past their aversion. 

Concrete ways to touch reality faster

I’ll conclude with some strategies for touching reality faster:

Minimize time to (possible) failure

Insofar as you have any uncertainties that might threaten the viability of a project, you should test them as soon as possible. I often find that I’m aware of many of the ways that the projects I’m working on could go wrong. As a result, I find the cognitive strategy of trying to expose as many of a project’s points of failure as soon as possible to be helpful for coming up with experiments. In my case, I also find it helpful to directly try to show that my projects are nonviable as soon as possible. 

See Jacob Steinhardt’s Research as a Stochastic Decision Process for a more detailed discussion of this strategy. 

Create toy examples

Real machine learning applications (and machine learning theory) often feature many complexities and practical difficulties that are irrelevant to the validity of the core insights behind your project. Not only can it take quite a long time to get any results at all, but your experiments can also be invalidated by implementation details. In contrast, a good toy example abstracts away all of that complexity, which lets you get information about the viability of your project much faster. Personally, I find it helpful to think about the minimal case that would show my insight is correct.[6]
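To make this concrete, here's a minimal sketch of what building a toy task can look like. Everything in it is an illustrative assumption (invented names and numbers, not an example from the post): the task is constructed so that one property—long-range structure—is the only signal, so any method that beats the trivial baseline must actually be exploiting that property.

```python
# Minimal sketch of a toy task: if the core insight is "my method should
# exploit long-range structure", build the smallest dataset where that
# structure is the *only* signal. All names/numbers here are illustrative.
import random


def make_long_range_task(n_examples: int, seq_len: int, seed: int = 0):
    """The label depends only on whether the first and last tokens match;
    every token in between is noise. A method that can't use long-range
    information should do no better than the trivial baseline here."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_examples):
        seq = [rng.randint(0, 9) for _ in range(seq_len)]
        label = int(seq[0] == seq[-1])
        data.append((seq, label))
    return data


def majority_baseline(data):
    """Accuracy of always predicting the most common label -- the bar
    any real method has to clear before the toy result means anything."""
    labels = [y for _, y in data]
    majority = max(set(labels), key=labels.count)
    return sum(y == majority for y in labels) / len(labels)


data = make_long_range_task(1000, seq_len=20)
print(round(majority_baseline(data), 2))
```

The point isn't the specific task: it's that generating it takes minutes, so you can find out whether your method even uses the structure you care about before committing to a full-scale implementation.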

Mock or simplify difficult components

Similarly, when working with components that are difficult to implement or train, but aren’t key uncertainties as to the viability of your project, it’s often a good idea to replace said component with a cheating implementation. For example, if you’re studying a new protocol for debate using language models, you can replace the language models with humans, which probably provides a weak upper bound on your technique’s performance. A related strategy is to replace complicated components with simple baselines. For example, even if your plan is to finetune a large language model on the debate protocol, you might be able to get some signal as to its viability by using text-davinci-003 with a well-designed prompt.  
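As a concrete illustration of the mocking strategy (all class and function names here are hypothetical, invented for this sketch rather than taken from any real debate codebase), the expensive component can sit behind a small interface, so a hard-coded stub can stand in for it while you test the surrounding protocol logic:

```python
# Sketch: hide the expensive component (a trained model) behind a small
# interface so a cheap, hard-coded stub can stand in for it while the
# surrounding protocol logic is tested. All names here are hypothetical.

class Debater:
    """Interface for anything that can produce an argument for a side."""
    def argue(self, question: str, side: str) -> str:
        raise NotImplementedError


class StubDebater(Debater):
    """Cheating implementation: canned answers instead of a trained model."""
    def __init__(self, canned: dict):
        self.canned = canned  # maps (question, side) -> argument

    def argue(self, question: str, side: str) -> str:
        return self.canned[(question, side)]


def run_debate(debater_a: Debater, debater_b: Debater, question: str) -> list:
    """One round of a two-sided debate; this is the protocol logic under test."""
    return [
        ("A", debater_a.argue(question, "pro")),
        ("B", debater_b.argue(question, "con")),
    ]


# The protocol can now be exercised end-to-end with no model in the loop.
canned = {
    ("Is the sky blue?", "pro"): "Rayleigh scattering favors blue light.",
    ("Is the sky blue?", "con"): "At sunset the sky is red, not blue.",
}
transcript = run_debate(StubDebater(canned), StubDebater(canned), "Is the sky blue?")
print(len(transcript))  # 2
```

Once the protocol logic survives contact with the stub, the stub can be swapped for a real model (or a human) without changing `run_debate`—which is exactly what lets you test the key uncertainty before paying for the expensive component.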

Have good collaborators 

Finally, I think that having good collaborators has been by far the most helpful strategy for grounding my ideas. I find that it’s significantly harder to come up with obvious tests for your own ideas than it is for others to do so. A good collaborator on a research project can regularly save me hours of schlepping, for example by suggesting simple tests, sharing code, or even performing the tests directly (especially in cases where they have a comparative advantage). This is especially the case when they also prioritize touching reality as soon as possible. :) 

  1. ^

    After I published this post, Sam Toyer pointed me at Michael Bernstein's concepts of vectoring (identifying a key direction of uncertainty) and velocity (quickly iterating on ideas by testing directions of uncertainty), which seem like a good breakdown of how to touch reality. 

  2. ^

    Detailed epistemic status: I'm pretty frustrated with how slow I write, so this is an experiment in writing fast as opposed to carefully. That being said, this is ~the prevailing wisdom amongst many ML practitioners and academics, and similar ideas have been previously discussed in the LessWrong/Alignment Forum communities, so I'm pretty confident that it's directionally correct. I also believe (less confidently) that this is good advice for most kinds of research or maybe even for life in general.

  3. ^

    As Michael Dennis pithily puts it, this is the point at which the process goes from only you correcting the theory, to the theory being able to correct you. 

  4. ^

    Famously, you don’t even need the RNN parts, you only need attention.

  5. ^

    Though, to be fair, there were other circumstances: it was during the pandemic, and I was feeling incredibly gloomy in general.

  6. ^

    (Edited to add:) That being said, as Scott Emmons points out in a comment below, it's important to not just have results on toy examples!

Comments

Thanks for writing this post! (And man, if this is you deliberately writing fast and below your standards, you should lower your standards way more!). I very strongly agree with this within mechanistic interpretability and within pure maths (and it seems probably true in ML and in life generally, but those are the two areas I feel vaguely qualified to comment on).

Aversion to Schlepping

Man, I strongly relate to this one... There have been multiple instances of me having an experiment idea I put off for days to weeks, only to do it in 1-3 hours and get really useful results. I've had some success experimenting with things like speedrunning afternoons, where I drop all of my ongoing tasks, try to pick a self-contained thing that seems high priority, and sprint on getting it done ASAP (this doesn't work well for day-to-week schleppy tasks, but I'm more OK with sucking at those).

Under why touch reality, IMO the most important reason is that it'll help you form ideas that are good! It's much much easier to do this when you have a lot of surface area on what's actually going on, and enough experience and loose threads to spark curiosities and new ideas.

Under why don't people touch reality, honestly the strongest reason for me is just procrastination/lacking urgency (which is somewhat aversion to schlepping, but less central) - even if I know exactly what it'd be sensible to do, there's rarely a reason to do it right now rather than later.

Some more strategies I like for touching reality faster (there's some overlap with yours):

  • Try explaining your understanding to other people. Notice when you're confused about a concept, and go and try to figure out what's going on (ideally by building some kind of toy model and coding something yourself)
  • Meta strategy - learn how to use good tooling, debug issues in your workflow, and just practice running a lot of quick experiments. I find that being able to test a hypothesis about GPT-2 Small in a few minutes makes it much easier to touch reality, in a way that I just wouldn't if it took hours to days. Even if the difference in time isn't that stark, the more you have the right muscle memory, the lower the activation energy
  • Try to Murphyjitsu your ideas - assume things will go wrong, or that there's some crucial flaw in your beliefs, and use your intuition to fill in the blanks re why. Use this to generate ideas to try falsifying your plan

Thanks!

just procrastination/lacking urgency

This is probably true in general, to be honest. However, it's an explanation for why people don't do anything, and I'm not sure this differentially leads to delaying contact with reality more than say, delaying writing up your ideas in a Google doc. 

Some more strategies I like for touching reality faster

I like the "explain your ideas to other people" point, it seems like an important caveat/improvement to the "have good collaborators" strategy I describe above. I also think the meta strategy point of building a good workflow is super important!

I like the "explain your ideas to other people" point, it seems like an important caveat/improvement to the "have good collaborators" strategy I describe above

Importantly, the bar for "good person to explain ideas to" is much lower than the bar for "is a good collaborator". Finding good collaborators is hard!

Thanks for writing this! I appreciate it and hope you share more things that you write faster without totally polishing everything.

One word of caution I'd share is: beware of spending too much effort running experiments on toy examples. I think toy examples are useful to gain conceptual clarity. However, if your idea is primarily empirical (such as an improvement to a deep neural network architecture), then I would recommend spending basically zero time running toy experiments.

With deep learning, it's often the case that improvements on toy examples don't scale to being improvements on real examples. In my experience, lots of papers in reinforcement learning don't actually work because the authors only tried out the method on toy examples. (Or, they tried out the method on more complex examples, but they didn't publish those experiments because the method didn't work.) So trying out a new empirical method on a toy example provides little information about how valuable the empirical method will be on real examples.

The flipside of this warning is advice: for empirical projects, test your idea on as diverse and complex a set of tasks as is possible. The good empirical ideas are few, and extensive empirical testing is the best way a researcher can determine if their idea will stand the test of time.

When running diverse and complex experiments, it is still important to design the simplest possible experiment that will be informative, as Lawrence describes in the section "Mock or simplify difficult components." I suggest being simple (such as Lawrence's example of using text-davinci-003 instead of finetuning one's own model) rather than being toy (using a tiny or hard-coded language model).

I think this is a good word of caution. I'll edit in a link to this comment.

Unfortunately, it turned out that Bayesian neural networks were significantly trickier to get working in practice on the value learning tasks we were working with, and nothing came of the project despite several months of effort. A few months later, a research engineer at CHAI found that many Bayesian neural network algorithms (including the one we were using for our project) often failed to approximate some toy 4-d distributions

Was that ever written up? I don't recall that result.

I don't think so, unfortunately, and it's been so long that I don't think I can find the code, let alone get it running.