AI ALIGNMENT FORUM

Robert Kralisch

I am Robert Kralisch, one of the organizers of the AI Safety Camp, where I work as a research coordinator: I evaluate and support research projects that fit under the umbrella of "technical AI Safety research" and "conceptually sound approaches to AI Alignment".

I'm also an independent conceptual/theoretical Alignment Researcher. I have a background in Cognitive Science and I am interested in collaborating on an end-to-end strategy for AGI alignment.

The three main branches I aim to contribute to are conceptual clarity (what we should mean by agency, intelligence, embodiment, etc.), the exploration of more inherently interpretable cognitive architectures, and Simulator theory.

One of my concrete goals is to figure out how to design a cognitively powerful agent such that it does not become a Superoptimiser in the limit. 

Posts

AI Safety Camp 10 · 15 karma · 10mo · 6 comments
Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025) · 5 karma · 1y · 1 comment