AI Safety Camp

Edited by Remmelt et al., last updated 30th Dec 2024

AI Safety Camp is a non-profit initiative that runs programs for diversely skilled researchers who want to try collaborating on an open problem in reducing AI existential risk.

Posts tagged AI Safety Camp

Karma | Title | Authors | Age | Comments
14 | AISC 2024 - Project Summaries | NickyP | 2y | 0
6 | AI Safety Camp 2024 | Linda Linsefors | 2y | 0
28 | Applications for AI Safety Camp 2022 Now Open! | adamShimi | 4y | 0
15 | AI Safety Camp 10 | Robert Kralisch, Linda Linsefors, Remmelt | 1y | 6
31 | AISC9 has ended and there will be an AISC10 | Linda Linsefors | 1y | 0
32 | Project Intro: Selection Theorems for Modularity | CallumMcDougall, Avery, Lucius Bushnaq | 4y | 3
24 | Towards a formalization of the agent structure problem | Alex_Altair | 1y | 0
20 | Theories of Modularity in the Biological Literature | CallumMcDougall, Avery, Lucius Bushnaq | 4y | 0
19 | [Aspiration-based designs] 1. Informal introduction | B Jacobs, Jobst Heitzig, Simon Fischer, Simon Dima | 1y | 0
15 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn, elandgre, Beth Barnes | 3y | 0
14 | A brief review of the reasons multi-objective RL could be important in AI Safety Research | Ben Smith, Roland Pihlakas | 4y | 3
7 | Projects I would like to see (possibly at AI Safety Camp) | Linda Linsefors | 2y | 5
7 | Apply to lead a project during the next virtual AI Safety Camp | Linda Linsefors, Remmelt | 2y | 0
10 | Extraction of human preferences 👨→🤖 | arunraja-hub | 4y | 2
5 | Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025) | Linda Linsefors, Remmelt Ellen, Robert Kralisch | 1y | 1
(Showing 15 of 27 posts.)