TL;DR: We're announcing a new course designed to introduce students with a background in machine learning to the most relevant concepts in empirical ML-based AI safety. The course is available publicly here. Background: AI safety is a small but rapidly growing field, and both younger and more experienced researchers are...
TL;DR: We are announcing a $20k bounty for publicly-understandable explainers of AI safety concepts. We are also releasing the results of the AI Safety Arguments competition. Background: Of the technologists, ML researchers, and policymakers thinking about AI, very few are seriously thinking about AI existential safety. This results in less...
Program Overview: The Machine Learning Safety Scholars program is a paid, 9-week summer program designed to help undergraduate students gain skills in machine learning with the aim of using those skills for empirical AI safety research in the future. Apply for the program here by May 31st. The course will...
The Stanford Existential Risks Initiative (SERI) recently opened applications for the second iteration of the ML Alignment Theory Scholars (MATS) Program, which aims to help aspiring alignment researchers enter the field by pairing them with established research mentors and fostering an academic community in Berkeley, California over the summer. Current...
TL;DR: We're distributing $20k in total as prizes for submissions that make effective arguments for the importance of AI safety. The goal is to generate short-form content for outreach to policymakers, management at tech companies, and ML researchers. This competition will be followed by another in around a month that...
In the past six weeks, the Stanford Existential Risks Initiative (SERI) has been running a trial for the “ML Alignment Theory Scholars” (MATS) program. Our goal is to increase the number of people working on alignment theory, and to do this, we’re running a scholars program that provides mentorship, funding,...