This is a linkpost for https://humanaligned.ai/2025

Join us at the fifth Human-Aligned AI Summer School in Prague from 22nd to 25th July 2025!

Update: Our speaker lineup is now confirmed -- see below!
Update: We still have capacity for more excellent participants as of late June. Please help us spread the word to people who would be a good fit for the school. We also have some additional funding to financially support some of the participants.

We will meet for four intensive days of talks, workshops, and discussions covering the latest trends in AI alignment research and broader framings of AI risk.

Apply now; applications are evaluated on a rolling basis.

The school's intended audience includes PhD students, researchers working in ML/AI, talented students, and anyone interested in learning more about AI alignment.

What to expect

The school focuses on teaching and exploring approaches and frameworks rather than presenting the latest research results. Much of the content is technical: attendees are assumed to understand current ML approaches and some of the underlying theoretical frameworks.

This year, we explore AI alignment through three core areas:

  • Technical alignment research: We'll examine current technical approaches including behavioral evaluations, mechanistic interpretability, scalable oversight, and model organisms of misalignment. We'll discuss recent developments in these areas and what they tell us about the potential and limitations of these methods.
  • AI strategy and systemic alignment: We'll explore topics such as timeline considerations, strategic and governance challenges around powerful AI, economic models of AI development, and risks of gradual disempowerment in a post-AGI world. We'll focus on building overall understanding and how these considerations can inform technical research.
  • Foundational frameworks: We'll visit research areas relevant to recent AI developments, such as multi-agent dynamics and cooperation, theories of agency, bounded rationality, and realistic models of goal-directed behavior. These frameworks help us understand what alignment means in complex environments containing both AIs and humans, and how to develop appropriate techniques.

The school consists of lectures and topical series, focused smaller-group workshops and discussions, expert panels, and opportunities for networking, project brainstorming, and informal discussion. A detailed program will be announced shortly before the event; see the program of the previous school for an illustration of its content and structure.

Confirmed speakers

Check out the website for updates and further details about the program.


Our speaker lineup is now confirmed and features an impressive roster of researchers, see the updated list above.

As of late June, we still have capacity for additional participants, and we'd appreciate your help with outreach! We have accepted a very strong cohort so far, and the summer school will be a great place to learn from our speakers, engage in insightful discussions, and connect with other researchers in AI alignment. In addition to people already familiar with AI alignment, we're particularly interested in reaching technical students and researchers who may be newer to the field but bring strong backgrounds in ML/AI, mathematics, computer science, or related areas. We also have some additional funding to financially support some of the participants.

See the website for details.
