This is a link post for two AI safety programs we’ve just opened applications for: https://www.constellation.org/programs/astra-fellowship and  https://www.constellation.org/programs/researcher-program

Constellation is a research center dedicated to safely navigating the development of transformative AI. We’ve previously helped run the ML for Alignment Bootcamp (MLAB) series and Redwood’s month-long research program on model internals (REMIX) in addition to a variety of other field-building programs & events.[1]

This winter, we are running two programs aimed at growing and supporting the ecosystem of people working on AI safety: 

  • The Constellation Visiting Researcher Program provides an opportunity for around 20 researchers to connect with leading AI safety researchers, exchange ideas, and find collaborators while continuing their research from our offices in Berkeley, CA. The funded program will take place this winter, from January 8 to March 1, 2024.
  • The Astra Fellowship provides an opportunity for around 20 people to conduct research in AI safety with experienced advisors. Fellows will be based out of the Constellation office, allowing them to connect and exchange ideas with leading AI safety researchers. The program will take place in Berkeley, CA between January 8 and April 1, 2024. 

Applications for both are due November 10, 11:59pm anywhere on Earth. You can apply to the Astra Fellowship here and the Visiting Researcher Program here. If you are unsure about your fit, please err on the side of applying. We especially encourage women and underrepresented minorities to apply. You can refer others who you think might be a good fit through this form.

Logistics: Housing and travel expenses are covered for both programs, and Astra fellows will receive an additional monetary stipend. The start and end dates for both programs are flexible. 

Questions? Email programs@constellation.org or ask them below. 

  1. ^

    Over 15 participants from these past programs are now working on AI safety at Anthropic, ARC Evals, ARC Theory, Google DeepMind, OpenAI, Open Philanthropy, and Redwood Research.


Note that the Astra Fellowship link in italics at the top goes to the Researcher Program, not the Astra Fellowship.

Thanks, Neel! It should be fixed now.