What is the AI Safety Camp?

Would you like to work on AI safety or strategy research, and are you looking for a concrete way to get started? We are organizing this camp for aspiring AI safety and strategy researchers. At the camp, you:

  • build connections with others in the field
  • build your research portfolio
  • receive feedback on your research ideas and help others with theirs
  • make concrete progress on open AI safety research questions

Read more about the last research camp here, including a summary of the research produced.

What’s the structure of the camp?

The camp is preceded by 7 weeks of preparation in the form of an online study group of 3–5 people, followed by a 10-day intensive camp with the aim of creating and publishing a research paper, extensive blog post, or GitHub repository.

What will attendees work on?

Participants will work in groups on tightly-defined research projects, for example in the following areas:

  • Strategy and Policy
  • Agent Foundations (decision theory, subsystem alignment, embedded world models, MIRI-style)
  • Value learning (inverse reinforcement learning, approval-directed agents, wireheading, …)
  • Corrigibility / Interruptibility
  • Side Effects, Safe Exploration
  • Scalable & Informed Oversight
  • Robustness (distributional shift, adversarial examples)
  • Human Values (including philosophical and psychological approaches)

When and where?

4–14 October 2018, in Prague, Czech Republic.

Pricing

Attendance is free.

Apply

Applications and more information at aisafetycamp.com
