AI ALIGNMENT FORUM
Meta-theory of rationality
Feb 20, 2025 by Cole Wyeth

Here I speculate about questions such as:

What makes a theory of rationality useful or useless?

When is a theory of rationality useful for building agents, describing agents, or becoming a better agent, and to what extent should the answers be connected?

How elegant should we expect algorithms for intelligence to be?

Which concepts deserve to be built into the root/core design of an AGI, and which should be left for the AGI to discover? Perhaps relatedly, does human cognition have such a root/core algorithm, and if so, what is it?

Posts in this sequence:

4 · Levels of analysis for thinking about agency — Cole Wyeth, 8mo, 0 comments
10 · Action theory is not policy theory is not agent theory — Cole Wyeth, 2y, 0 comments
4 · What makes a theory of intelligence useful? — Cole Wyeth, 8mo, 0 comments
7 · Existing UDTs test the limits of Bayesianism (and consistency) — Cole Wyeth, 7mo, 0 comments
18 · Glass box learners want to be black box — Cole Wyeth, 5mo, 0 comments
13 · Modeling versus Implementation — Cole Wyeth, 5mo, 3 comments
9 · Pitfalls of Building UDT Agents — Cole Wyeth, 3mo, 0 comments