AI Safety Camp connects you with an experienced research lead to collaborate on a research project – helping you test your fit for a potential career in AI Safety research.
Applications for AI Safety Camp’s Virtual Edition in 2023 are now open!
AI Safety Camp Virtual 8 will be a 3.5-month online research program, running from 4 March to 18 June 2023, in which participants form teams to work on pre-selected projects.
We value people with diverse backgrounds and skillsets, from cybersecurity to cognitive science. Not all projects require prior experience in AI Safety, mathematics, or machine learning. You can read in detail about the research topics and each project’s skill requirements for our upcoming edition by following the Project Proposal links below.
Projects you can apply to…
Uncertainty -> Soft Optimization with Jeremy Gillen
Inquire into Uncontrollable Dynamics of AGI with Remmelt Ellen
Discussing and Crystallising a Research Agenda Based on Positive Attractors and Inherently Interpretable Architectures with Robert Kralisch
Cyborgism with Nicholas Kees Dupuis
Understanding Search in Transformers with Michael Ivanitskiy
Interdisciplinary Investigation of DebateGPT with Paul Bricman
Does Introspective Truthfulness Generalize in LMs? with Jacob Pfau
Inducing Human-Like Biases in Moral Reasoning LMs with Bogdan-Ionut Cirstea
Behavioral Annotation Framework for the Contextualized and Personalized Fine-Tuning of Foundation Models with Eleanor “Nell” Watson
How Should Machines Learn from Default Options? with En Qi Teo
Literature Review of the Neurological Basis of Human Values and Preferences with Linda Linsefors
Machine learning for Scientific Discovery: the Present and Future of Science-Producing AI Models with Eleni Angelou
Policy Proposals for High-Risk AI Regulation with Koen Holtman
Developing Specific Failure Stories about Uncontrollable AI with Karl von Wendt
[Photo: first virtual edition – a spontaneous collage]
Every team will have: a Research Lead (RL), a Team Coordinator (TC), and several other team members.
All team members are expected to work at least 10 hours per week on the project, which includes joining weekly team meetings and communicating regularly between meetings with other team members about their work.
As of yet, we cannot commit to offering stipends to team members, because a previously confirmed grant fell through. Another grantmaker is currently evaluating a replacement grant for AI Safety Camp. If it is confirmed, team members can opt in to receive a minimum of $500 gross per month (up to $2000 for full-time work).
The RL is the person behind the research proposal. If a group forms around their topic, the RL will guide the research project and keep track of relevant milestones. When things inevitably don’t go as planned (this is research, after all), the RL is in charge of setting the new course.
The RL is part of the research team and will contribute to the research the same as everyone else on the team.
The TC is the ops person of the team. They are in charge of making sure meetings are scheduled, checking in with individuals on their task progress, and so on. The TC and RL can be the same person.
The role of the TC is important but not expected to take much time (except on project-management-heavy teams). Most of the time, the TC will act as a regular team member, contributing to the research the same as everyone else.
Other team members will work on the project under the leadership of the RL and the TC. Team members will be selected based on their relevant skills, understanding, and commitment to contribute to the research project.
You can contact us at email@example.com
I wonder if it would make sense to make this half-open, in the sense that you would publish links to the study materials on LW, and maybe also some of the results, so that people who didn't participate have a better idea.
There is no study material, since this is not a course. If you are accepted to one of the project teams, then you will work on that project.
You can read about the previous research outputs here: Research Outputs – AI Safety Camp
The most famous research to come out of AISC is the coin-run experiment:
We Were Right! Real Inner Misalignment – YouTube
Goal Misgeneralization in Deep Reinforcement Learning (arXiv:2105.14111)
But the projects are different each year, so the best way to get an idea of what it's like is just to read the project descriptions.