I've been leaning towards a career in academia for >3 years, and recently got a tenure track role at Cambridge. This post sketches out my reasoning for preferring academia over industry.
Thoughts on Industry Positions:
A lot of people working on AI x-risk seem to think it's better to be in industry. I think the main arguments for that side of things are:
- All the usual reasons for preferring industry, e.g. fewer non-research obligations and more resources.
- AGI is expected to be built in industry (e.g. by OpenAI, Google, or DeepMind), and if you're there, you can influence the decision-making around development and deployment.
I think these are good reasons, but far from definitive.
I'll also note that nobody seems to be going to Google, even though they are arguably the most likely to develop AGI, since 1) they are bigger, publish more, and have more resources, and 2) they can probably steal from DeepMind to some extent. So if you ARE going into industry, please consider working for Google. The same goes for Chinese companies.
My reasons for preferring academia:
- Mentorship and exponential growth: In academia, you can mentor a lot more people, and this leads to a much higher rate of exponential growth. My quick estimate is that as an academic you can produce ~10 new researchers in 5 years; in industry, it's more like ~3. I think you might also have significant, but hard-to-measure impact through teaching and other academic activities.
- Personal fit: Unlike (I think) most people in the field, I don't like coding much. I'm also not a theoretician. I am more of a big-picture "idea person", and more of an extrovert. I like the idea of spending most of my time managing others, writing, giving talks, etc. I have far too many ideas to pursue effectively on my own. I also don't like the idea of having a boss.
- Better position for advocacy: There are many reasons I think academia makes for a better "bully pulpit".
- A tenure track faculty position at a top-20 institution is higher status than a research scientist position.
- Many academics find employees of big tech companies somewhat suspect, e.g. viewing them as sell-outs or shills to some extent.
- None of the tech companies has a sufficiently credible commitment to reducing AI x-risk (and knowing what steps to take to do that) for my taste.
- Tech companies don't support many forms of outspoken advocacy.
- Tech companies are unlikely to support governance efforts that threaten their core business model. But I think radical governance solutions are likely necessary, and that political activism in alliance with critics of big tech is likely necessary as well.
- Tenure provides much better job security than employment at tech companies.
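The mentorship point above is essentially a compounding-growth argument. As a rough illustration only (using the post's illustrative estimates of ~10 vs. ~3 new researchers per 5 years, and the strong simplifying assumption that every trainee goes on to train others at the same rate), the gap compounds quickly:

```python
# Sketch of the compounding ("exponential growth") argument above.
# Assumption (not from the post): trainees themselves train new
# researchers at the same per-5-year rate, with no attrition.

def researchers_after(years, new_per_5_years, initial=1.0):
    """Total researchers after `years`, if each researcher produces
    `new_per_5_years` new researchers every 5 years."""
    growth_per_year = (1 + new_per_5_years) ** (1 / 5)
    return initial * growth_per_year ** years

# Post's illustrative numbers: ~10 new researchers per 5 years in
# academia vs. ~3 in industry. Over 15 years, one academic "seeds"
# roughly 11^3 = 1331 researchers vs. 4^3 = 64 for industry.
academia = researchers_after(15, new_per_5_years=10)
industry = researchers_after(15, new_per_5_years=3)
```

Of course the real numbers are far messier (attrition, trainees who don't mentor, etc.), but the ratio, not the absolute counts, is what drives the argument.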
Main crux: timelines?
A lot of people think academia only makes sense if you have longer timelines. I think this is likely true to some extent, but I think academia starts to look like a clear win once timelines reach 5-10 years, so you need to be quite confident in very short timelines to think industry is a better bet. Personally, I'm also quite pessimistic about our chances for success if timelines are that short; I think we have more leverage if timelines are longer, so it might make sense to hope that we're lucky enough to live in a world where AGI is at least a decade away.
I think the main cruxes for this choice are:
1) timelines
2) personal fit
3) expected source of impact.
I discussed (1) and (2) already. By (3), I mean roughly: "Do you expect the research you personally conduct/lead to be your main source of impact? Or do you think your influence on others (e.g. mentoring students, winning hearts and minds of other researchers and important decision makers) will have a bigger impact?" I think for most people, influencing others could easily be a bigger source of impact, and I think more people working on reducing AI x-risk should focus on that more.
But if someone has a clear research agenda, a model of how it will substantially reduce x-risk, and a well-examined belief that their counterfactual impact on pushing the agenda forward is large, then I think there's a strong case for focusing on direct impact. I don't think this really applies to me; all of the technical research I can imagine doing seems to have a fairly marginal impact.
I've discussed this question with a good number of people, and I think I've generally found my pro-academia arguments to be stronger than their pro-industry arguments (I think probably many of them would agree?). I'd love to hear arguments people think I've missed.
EDIT: in the above, I wanted to say something more like: "I think the average trend in these conversations has been for people to update in the direction of academia being more valuable than they thought coming into the conversation". I think this is true and important, but I'm not very confident in it, and I know I'm not providing any evidence... take it with a grain of salt I guess :).
I... think we've discussed this? But I don't agree, at least insofar as the arguments are supposed to apply to me as well (so e.g. not the personal fit part).
Some potential disagreements:
Personally, I find the "bully pulpit" argument for academia most persuasive.
Btw, planned summary for the Alignment Newsletter:
Yeah we've definitely discussed it! Rereading what I wrote, I did not clearly communicate what I intended to...I wanted to say that "I think the average trend was for people to update in my direction". I will edit it accordingly.
I think the strength of the "usual reasons" has a lot to do with personal fit and what kind of research one wants to do. Personally, I basically didn't consider salary as a factor.
When you say academia looks like a clear win within 5-10 years, is that assuming "academia" means "starting a tenure-track job now?" If instead one is considering whether to begin a PhD program, for example, would you say that the clear win range is more like 10-15 years?
Also, how important is being at a top-20 institution? If the tenure track offer was instead from University of Nowhere, would you change your recommendation and say go to industry?
Would you agree that if the industry project you could work on is the one that will eventually build TAI (or be one of the leading builders, if there are multiple) then you have more influence from inside than from outside in academia?
My cut-off was probably somewhere between top-50 and top-100, and I was prepared to go anywhere in the world. If I couldn't make it into the top 100, I think I would definitely have reconsidered academia. If you're ready to go anywhere, I think it makes it much easier to find somewhere with high EV (but you might have to move up the risk/reward curve a lot).
Yes. But ofc it's hard to know if that's the case. I also think TAI is a less important category for me than x-risk inducing AI.
Makes sense. I think we don't disagree dramatically then.
Also makes sense -- just checking, does x-risk-inducing AI roughly match the concept of "AI-induced potential point of no return" or is it importantly different? It's certainly less of a mouthful so if it means roughly the same thing maybe I'll switch terms. :)
um sorta modulo a type error... risk is risk. It doesn't mean the thing has happened (we need to start using some sort of phrase like "x-event" or something for that, I think).
I think one thing to consider is that the two paths don't have an equal % chance to succeed. Getting a tenure track position at a top 20 university is hard. Really hard. Getting a research scientist position is, based on my very uncertain and informal understanding, less hard.
This doesn't seem so relevant to capybaralet's case, given that he was choosing whether to accept an academic offer that was already extended to him.
What are your thoughts for subfields of ML where research impact/quality depends a lot on having lots of compute?
In NLP, many people have the view that almost all of the high impact work has come from industry over the past 3 years, and that the trend looks like it will continue indefinitely. Even safety-relevant work in NLP seems much easier to do with access to larger models with better capabilities (Debate/IDA are pretty hard to test without good language models). Thus, safety-minded NLP faculty might end up in a situation where none of their direct work is very impactful, but all of the expected impact is by graduating students who end up going to work in industry labs in particular. How would you think about this kind of situation?
You can try to partner with industry, and/or advocate for big government $$$.
I am generally more optimistic about toy problems than most people, I think, even for things like Debate.
Also, scaling laws can probably help here.