AI Safety Camp

Contributors: Remmelt Ellen, EricB

AI Safety Camp (AISC) is a non-profit initiative that runs programs for diversely skilled researchers who want to try collaborating on an open problem in reducing AI existential risk.

Official Website

Posts tagged AI Safety Camp
Most Relevant

- Applications for AI Safety Camp 2022 Now Open! (Adam Shimi, 6mo; 28 karma, 0 comments)
- Project Intro: Selection Theorems for Modularity (TheMcDouglas, Avery, Lblack, 2mo; 30 karma, 3 comments)
- Theories of Modularity in the Biological Literature (TheMcDouglas, Avery, Lblack, 2mo; 16 karma, 0 comments)
- A brief review of the reasons multi-objective RL could be important in AI Safety Research (Ben Smith, 8mo; 13 karma, 3 comments)
- Extraction of human preferences 👨→🤖 (arunraja-hub, 9mo; 10 karma, 2 comments)
- Discussion: Objective Robustness and Inner Alignment Terminology (Jack Koch, Lauro Langosco, 1y; 40 karma, 6 comments)
- Empirical Observations of Objective Robustness Failures (Jack Koch, Lauro Langosco, 1y; 31 karma, 5 comments)
- Survey on AI existential risk scenarios (Sam Clarke, Alexis Carlier, Jonas Schuett, 1y; 21 karma, 2 comments)
- A survey of tool use and workflows in alignment research (Logan Riggs Smith, Jan Hendrik Kirchner, janus, jacquesthibs, 2mo; 15 karma, 2 comments)
- Acknowledging Human Preference Types to Support Value Learning (Nandi, Sabrina and Erin, 4y; 11 karma, 0 comments)
- Open Problems in Negative Side Effect Minimization (Fabian Schimpf, Lukas Fluri, 15d; 2 karma, 1 comment)