Here are 10 helpful pieces of advice I received in 2022:

  1. Understand things for yourself. Figure out what you actually think about this AI stuff. What do you think will happen? What concrete risks and scenarios are you worried about? (Credit to Oliver Habryka. Last year at a winter break retreat in Berkeley, he asked me to explain how I thought AI would lead to an existential catastrophe. I had a weak answer at the time, which motivated me to improve.)
  2. Write about “big if true” opinions. One of the biggest mistakes junior researchers make is that they try to be too neutral. Be opinionated instead. Find something that you think other people are wrong about, and then write up why you think they’re wrong. Be open to changing your mind, but be OK with starting off by saying “let me see if I can defend this thing that I think I believe.” (Credit to Ajeya Cotra. Note, though, that the version presented here has gone through an “Akash filter” and may only partially reflect Ajeya’s model.)
  3. Write concrete scenarios about your worldview. Write about what you think the world will look like, and be concrete. (Credit to Daniel Kokotajlo and his post “What 2026 looks like.”)
  4. Write about harder topics that force you to learn. Write about topics that push you to think more. Write about things that feel less like “teaching/explaining things that you already know” and more like “learning things and getting less confused about things you don’t yet understand.” (Credit to Olivia Jimenez. I think earlier in the year, my posts were more like “reflections from EAG”; these days I’m thinking more about how the community should allocate technical talent and what I think about the OpenAI alignment plan. I expect to be wrong more often and learn more rapidly.)
  5. Think about macrostrategy. (Credit to Thomas Larsen, who initiated a shift in the kind of work I’m prioritizing. I now identify less as an “AIS field-builder” and more as “someone who’s trying to become less confused about AI risk, AI governance, and AI macrostrategy.”)
  6. Think regularly about who you could learn from; message them. I think this strategy is overpowered and underutilized. I think it’s one of the few malleable things that can >3X the rate at which people learn & find opportunities. Obvious caveat that you shouldn’t expect anyone to respond & you should respect boundaries. But there are actually a lot of smart/competent people in the community who can (and want to) help. (Credit to Olivia Jimenez & various SERI-MATS mentors who I interviewed.)
  7. Think carefully about personal and professional boundaries. I was surprised at how quickly my personal and professional networks ended up overlapping, and at how quickly I started to notice power dynamics. Apparently, it’s not uncommon for people to be like “wait, what? I’m new and confused about so much; why are people taking me seriously, or deferring to me, or thinking about how our social interactions will affect their professional life?” (Credit to Aris Richardson and various other friends.)
  8. Focus on finding amazing teammates/allies. I think it’s common for people to think “what should I work on?” and less common for people to think “who should I work with?” At least for me, the latter is extremely important. 
  9. Take fewer (and shorter) meetings. Most things don’t need to be a meeting. Most 60-min meetings can be 30-min meetings, and most 30-min meetings can be “can you please send me a Google doc that describes how you’ve been thinking about X?” The “if someone I don’t know asks for a meeting, ask them for a Google doc before agreeing to meet with them” method has been great. (Credit to Olivia Jimenez.)
  10. Slow is smooth, and smooth is fast. Many things can be done 5-10X faster. Some things can’t. Apparently, surgeons have a phrase: “slow is smooth, and smooth is fast.” Notice when you’re doing something that shouldn’t be 80-20ed, and be willing to slow down. Areas where I think the “slow and smooth” approach is especially helpful: interpersonal conflicts, mental health interventions, learning about a new topic, conflicts between different parts of the self (see IDC), and getting “unstuck” when doing research. (Credit to Everett Smith.)

Disclaimer: It’s plausible that the people I “credit” would find my summaries inaccurate or no longer endorse the advice.


Take fewer (and shorter) meetings. Most things don’t need to be a meeting.

Also, if something needs a meeting, send relevant information (or links to information) to participants before the meeting, and write a summary after the meeting and send it to participants.

Sending information before the meeting means that all participants start on the same page. If you skip this, you may spend half of the meeting explaining to half of the participants things that the other half already knows.

Sending a summary creates a written record that participants can review if they forget something, and you can also send it to people who were not present at the meeting. Also, if there was an illusion of understanding (some participants believe you concluded X, others believe you concluded non-X), sending a summary makes it possible to notice the misunderstanding and raise an objection. If you skip this, you may soon have another meeting on exactly the same topic, because people forget or get confused.

EDIT:

By the way, I think there is nothing wrong with some people preferring to explain things verbally in a meeting, but the meeting should be clearly marked as such, and a recording should be made if it may also concern other people. For example, instead of holding one meeting on "what is X and how could we use it in our project," split it into two: "what is X" and "brainstorming about how to use X in our project." The first one can be skipped by people who already know what X is, and the invitation should include links to existing resources explaining X. The result of the first one should be at least a video recording, but preferably a written record. The invitation to the second one should include the recording of the first one, plus the links to external resources.

This is some great advice. Especially 1 and 2 seem foundational for anyone trying to reliably shift the needle by a notable amount in the right direction.

Are you a researcher?

I spent the first half to three-quarters of 2022 focused on AIS field-building projects. In the last few months, I've been focusing more on understanding AI risk threat models and on strategy/governance research projects.

Before 2022, I was a PhD student researching scalable mental health interventions (see here).

Have you written or published any ML-related papers? Perhaps you are working on that now? Why did you choose to switch from mental health to AI Alignment?