# What does it mean to apply decision theory?

Comment (Charlie Steiner): I feel like the logical inductor analogy still has more gas in the tank. Can we further limit the computational power and ask about the finite-time properties of some system that tries to correct its own computationally tractable systematic errors? I feel like there's some property of "not fooling yourself" that this should help with.

Based on discussions with Stuart Armstrong and Daniel Kokotajlo.

There are two conflicting ways of thinking about foundational rationality arguments such as the VNM theorem: as normative principles which agents like us can actually follow, or as idealized standards which we can use to theorize about rationality even though no bounded agent can fully meet them.

The second way of thinking is very useful. Philosophers, economists, and others have made some real progress thinking in this way. However, I'm going to argue that we should push for the first sort of normative principle. We should not be satisfied with normative principles which remain as unachievable ideals, giving upper bounds on performance without directly helping us get there.

This implies dealing with problems of bounded rationality. But it's not the sort of "bounded rationality" where we set out to explicitly model irrationality. We don't want to talk about partial rationality; we want notions of rationality which bounded agents can fully satisfy.

## Approximating Rationality

In order to apply an idealized rationality, such as Bayesian superintelligence, we need a concept of what it means to approximate it. This is more subtle than it may seem. You can't necessarily try to minimize some notion of distance between your behavior and the ideal behavior. For one thing, you can't compute the ideal behavior to find the distance! For another, simple imitation of the ideal behavior can go wrong: adopting one part of an optimal policy without adopting all the other parts might put you in a much worse position than the one you started in. Wei Dai discusses the problem in a post about Hanson's pre-rationality concept.

Clearly, we have some idea of which moves toward rationality are correct vs. incorrect. Think about the concept of cargo-culting: pointless and ineffective imitation of a more capable agent. The problem is the absence of a formal theory.

## Examples

One possible way of framing the problem: the VNM axioms, the Kolmogorov probability axioms, and other rationality frameworks give us a notion of consistency. We can check our behaviors and opinions for inconsistency. But what do we do when we notice an inconsistency? Which parts are we supposed to change? Here are some cases where there is at least a tendency to update in a particular direction:

If you're like me, you might have read some of those and immediately thought of a Bayesian model of the inference going on. Keep in mind that this is supposed to be about noticing actual inconsistencies, and what we want is a model which deals directly with that. It might turn out to be a kind of meta-Bayesian model, where we approximate a Bayesian superintelligence by way of a much more bounded Bayesian view which attempts to reason about what a truly consistent view would look like. But don't fool yourself into thinking a standard one-level Bayesian picture is sufficient just because you can look at some of the bullet points and imagine a Bayesian way to handle it.

It would be quite interesting to have a general "theory of becoming rational" which had something to say about how we make decisions in cases such as those I've listed.
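As a toy sketch of what a mechanical consistency check might look like (my own illustration, not from the post; the function and representation are hypothetical), here is a detector for one classic inconsistency, an intransitive preference cycle, the kind of thing a money-pump argument exploits:

```python
def find_cycle(preferences):
    """Detect a cycle in a strict-preference relation given as (better, worse) pairs.

    A cycle such as A > B > C > A violates the transitivity required by the
    VNM axioms; returning one is a concrete "noticed inconsistency".
    """
    graph = {}
    for better, worse in preferences:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    def dfs(node, path, visited):
        visited.add(node)
        path.append(node)
        for nxt in graph[node]:
            if nxt in path:  # back-edge: we have walked in a circle
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                result = dfs(nxt, path, visited)
                if result:
                    return result
        path.pop()
        return None

    visited = set()
    for node in graph:
        if node not in visited:
            cycle = dfs(node, [], visited)
            if cycle:
                return cycle
    return None

# An intransitive agent: prefers A to B, B to C, and C to A.
print(find_cycle([("A", "B"), ("B", "C"), ("C", "A")]))  # ['A', 'B', 'C', 'A']
print(find_cycle([("A", "B"), ("B", "C"), ("A", "C")]))  # None: consistent
```

Of course, the hard question in the text is not detecting the cycle but deciding which preference to revise once it is found; nothing in this check tells you that.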

## Logical Uncertainty

Obviously, I'm pointing in the general direction of logical uncertainty and bounded notions of rationality (i.e., notions of rationality which can apply to bounded agents). Particularly in the "noticing inconsistencies" framing, it sounds like this might entirely reduce to logical uncertainty. But I want to point at the broader problem, because (1) an example of this might not immediately look like a problem of logical uncertainty; (2) a theory of logical uncertainty, such as logical induction, might not entirely solve this problem; (3) logical uncertainty is an epistemic issue, whereas this problem applies to instrumental rationality as well; and (4) even setting all that aside, it's worth pointing at the distinction between ideal notions of rationality and applicable notions of rationality as a point in itself.

## The Ideal Fades into the Background

So far, it sounds like my suggestion is that we should keep our idealized notions of rationality, but also develop "theories of approximation" which tell us what it means to approach the ideals in a good way vs a bad way. However, I want to point out an interesting phenomenon: sometimes, when you get a really good notion of "approximation", the idealized notion of rationality you started with fades into the background.

## Example 1: Logical Induction

Start with the Demski prior, which was supposed to be an idealized notion of rational belief, much like the Solomonoff prior, but built for logic rather than computation. I designed the prior with approximability in mind, because I thought it should be a constraint on a normative theory that we actually be able to approximate the ideal. Scott and Benya modified the Demski prior to make it nicer, and noticed that when you do so, the approximation itself has a desirable property. The line of research called asymptotic logical uncertainty focused on such "good properties of approximations", eventually leading to logical induction.

A logical inductor is a sequence of improving belief assignments. The beliefs do converge to a probability distribution, which will have some resemblance to the modified Demski prior (and to Solomonoff's prior). However, the concept of logical induction gives a much richer theory of rationality, in which this limit plays a minor role. Furthermore, the theory of logical induction comes much closer to applying to realistic agents than "rational agents approximate a Bayesian reasoning with [some prior]".

## Example 2: Game-Theoretic Equilibria vs MAL

Game-theoretic equilibrium concepts, such as Nash equilibrium and correlated equilibrium, provide a rationality concept for games: rational agents who know one another to be rational are supposed to end up in equilibrium with each other. However, most games have multiple Nash equilibria, and even more correlated equilibria. How is a rational agent supposed to decide which of these to play? Assuming only the rationality of the other players is not enough to choose one equilibrium over another. If rational agents play an equilibrium, how do they get there?
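To make the selection problem concrete, here is a small sketch (my own illustration, not from the post; the payoff matrices are hypothetical) that enumerates the pure Nash equilibria of a coordination game and finds more than one:

```python
def is_nash(A, B, i, j):
    """Check whether the pure profile (i, j) is a Nash equilibrium of the
    bimatrix game with payoffs A (row player) and B (column player):
    neither player can gain by unilaterally deviating."""
    row_ok = all(A[i][j] >= A[k][j] for k in range(len(A)))
    col_ok = all(B[i][j] >= B[i][k] for k in range(len(B[0])))
    return row_ok and col_ok

# A coordination game with two pure Nash equilibria, illustrating the
# selection problem: the equilibrium concept alone does not say which to play.
A = [[2, 0], [0, 1]]   # row player's payoffs
B = [[2, 0], [0, 1]]   # column player's payoffs
equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(A, B, i, j)]
print(equilibria)  # [(0, 0), (1, 1)]: both coordination outcomes qualify
```

Both equilibria pass the consistency check, so "be in equilibrium" underdetermines what a rational agent should actually do.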

One approach to this conundrum has been to introduce refined equilibrium concepts, which admit some Nash equilibria and not others. Trembling-hand perfect equilibrium is one such concept. It introduces a notion of "stable" equilibria, pointing out that it is implausible that agents play "unstable" equilibria. However, while this narrows things down to a single equilibrium in some cases, it does not do so in all cases. Other refined equilibrium concepts may leave no equilibria at all for some games. To get rid of the problem, one would need an equilibrium concept which (a) leaves one and only one equilibrium for every game, and (b) follows from plausible rationality assumptions. Such things have been proposed, most prominently in Harsanyi & Selten's *A General Theory of Equilibrium Selection in Games*, but so far I find them unconvincing.

A very different approach is represented by multi-agent learning (MAL), which asks the question: can agents learn to play equilibrium strategies? In this version, agents must interact over time in order to converge to equilibrium play. (Or at least, agents simulate dumber versions of each other in an effort to figure out how to play.)

It turns out that, in MAL, there are somewhat nicer stories about how agents converge to correlated equilibria than there are about converging to Nash equilibria. For example, "Calibrated Learning and Correlated Equilibrium" (Foster & Vohra) shows that agents with a calibrated learning property will converge to correlated equilibrium in repeated play.

These new rationality principles, which come from MAL, are then much more relevant to the design and implementation of game-playing agents than the equilibrium concepts which they support. Equilibrium concepts, such as correlated equilibria, tell you something about what agents converge to in the limit; the learning principles which let them accomplish that, however, tell you about the dynamics -- what agents do at finite times, in response to non-equilibrium situations. This is more relevant to agents "on the ground", as it were.

And, to the extent that requirements like calibrated learning are NOT computationally feasible, this weakens our trust in equilibrium concepts as a rationality notion -- if there isn't a plausible story about how (boundedly) rational agents can get into equilibrium, why should we think of equilibrium as rational?

So, we see that the bounded, dynamic notions of rationality are more fundamental than the unbounded, fixed-point-style equilibrium concepts: if we want to deal with realistic agents, we should be more willing to adjust or abandon our equilibrium concepts in response to how nice the MAL story is than vice versa.
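A classic MAL dynamic of this kind is regret matching (Hart & Mas-Colell), whose empirical play converges to the set of correlated equilibria. The sketch below is my own toy implementation (iteration count and seed are arbitrary choices): regret matching in self-play on rock-paper-scissors, where the unique correlated equilibrium is uniform play, so the time-averaged strategies should drift toward (1/3, 1/3, 1/3):

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]  # row player's payoff; the game is zero-sum

def regret_matching_strategy(regret_sum):
    """Mix actions in proportion to positive cumulative regret (Hart & Mas-Colell)."""
    positives = [max(r, 0.0) for r in regret_sum]
    total = sum(positives)
    if total <= 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positives]

def train(iterations, rng):
    """Self-play regret matching; returns each player's time-averaged strategy."""
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [regret_matching_strategy(regrets[p]) for p in range(2)]
        acts = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in range(2)]
        for p in range(2):
            opp = acts[1 - p]
            sign = 1 if p == 0 else -1  # column player gets the negated payoff
            realized = sign * PAYOFF[acts[0]][acts[1]]
            for a in range(ACTIONS):
                # regret: how much better action a would have done against opp's move
                alt = sign * (PAYOFF[a][opp] if p == 0 else PAYOFF[opp][a])
                regrets[p][a] += alt - realized
                strategy_sum[p][a] += strats[p][a]
    return [[s / iterations for s in strategy_sum[p]] for p in range(2)]

avg = train(20000, random.Random(0))
print([round(x, 2) for x in avg[0]])  # approaches the uniform equilibrium mix
```

The dynamic, not the limiting equilibrium, is what the code actually implements; the equilibrium only describes where the averages end up.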

## Counterexample: Complete Class Theorems

This doesn't always happen. The complete class theorems give a picture of rationality in which we start with the ability and willingness to take Pareto improvements. Given this, we end up with an agent being classically rational: having a probability distribution, and choosing actions which maximize expected utility.

Given this argument, we become more confident in the usefulness of probability distributions. But why should this be the conclusion? A different way of looking at the argument could be: we don't need to think about probability distributions. All we need to think about is Pareto improvements.

Somehow, probability still seems very useful to think about. We don't switch to the "dynamic" view of agents who haven't yet constructed probabilistic beliefs, taking Pareto improvements on their way to reflective consistency. This just doesn't seem like a realistic view of bounded agents.

Yes, bounded agents are still engaged in a search for the best policy, which may involve finding new strategies which are strictly better along every relevant dimension. But bounded agency also involves making trade-offs, when no Pareto improvement can be found. This necessitates thinking in terms of probabilities. So it doesn't seem like we want to erase probability from our picture of practical agency.

Perhaps this is because, in some sense, the complete class theorems are not very good -- they don't really explain a less basic thing in terms of a more basic thing. After all, when can you realistically find a pure Pareto improvement?
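The limitation is easy to see in miniature (a sketch of my own; the policies and payoff vectors are hypothetical): once dominated policies are pruned, only genuine trade-offs remain, and the Pareto criterion is silent about how to choose among them.

```python
def pareto_dominates(a, b):
    """True if payoff vector a is at least as good as b in every state
    and strictly better in at least one: a "pure" Pareto improvement."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Payoffs of three candidate policies across three possible states of the world.
policies = {
    "p1": (3, 1, 2),
    "p2": (3, 2, 2),   # dominates p1: never worse, strictly better in state 2
    "p3": (1, 5, 1),   # trade-off vs p2: neither dominates the other
}

undominated = [n for n, v in policies.items()
               if not any(pareto_dominates(w, v) for w in policies.values())]
print(undominated)  # ['p2', 'p3']: choosing between them needs probabilities
```

Ranking p2 against p3 requires weighing state 2 against states 1 and 3, which is exactly where probabilistic beliefs re-enter the picture.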

## Conclusion

I've suggested that we move toward notions of rationality that are fundamentally bounded (applying to agents who lack the resources to be rational in more classical senses) and dynamic (fundamentally involving learning, rather than assuming the agent already has a good picture of the world; breaking down equilibrium concepts such as those in game theory, and instead looking for the dynamics which can converge to equilibrium).

This gives us a picture of "rationality" which is more like "optimality" in computer science: there, it's more typical to come up with a notion of optimality which actually applies to some algorithms. For example, "optimal sorting algorithm" usually refers to big-O optimality, and many sorting algorithms are optimal in that sense. Similarly, in machine learning, regret bounds are mainly interesting when they are achievable by some algorithm. (Although it can also be interesting to know a lower bound on achievable regret guarantees.)

Why should notions of rationality be so far from notions of optimality? Can we take a more computer-science-flavored approach to rationality?

Barring that, it should at least be of critical importance to investigate in what sense idealized notions of rationality are normative principles for bounded agents like us. What constitutes cargo-culting rationality, vs really becoming more rational? What kind of adjustments should an irrational agent make when irrationality is noticed?