Wait, do you think value uncertainty is equivalent/reducible to uncertainty about the correct prior?
Yep. Value uncertainty is reduced to uncertainty about the correct prior via the device of putting the correct values into the world as propositions.
Would that mean the correct prior to use depends on your values?
If we construe "values" as preferences, this is already clear in standard decision theory; preferences depend on both probabilities and utilities. UDT further blurs the line, because in the context of UDT, probabilities feel more like a "caring measure" expressing how much the agent cares about how things go in particular branches of possibility.
So one conflicting pair spoils the whole thing, i.e. ignoring the pair is a Pareto improvement?
Unless I've made an error? If the Pareto improvement doesn't impact the pair, then the gains-from-trade for both members of the pair are zero, making the product of gains-from-trade zero. But the Pareto improvement can't impact the pair, since an improvement for one would be a detriment to the other.
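To make the arithmetic concrete, here is a minimal sketch (the three-party setup, the shared surplus, and all the numbers are my own illustrative assumptions, not anything from the discussion): any deal that helps one member of a directly opposed pair hurts the other, so at least one of the pair has zero gain-from-trade and the product over all parties is zero.

```python
import numpy as np

# Toy illustration: three parties, where parties 1 and 2 have exactly opposed
# preferences over the one degree of freedom that affects them, so any gain
# for one is a loss for the other. Utilities are functions of an allocation
# x in [0, 1] plus a shared surplus s.

def utilities(x, s):
    u0 = s            # party 0 only cares about the shared surplus
    u1 = x            # party 1 wants x high
    u2 = 1.0 - x      # party 2 wants x low: directly opposed to party 1
    return np.array([u0, u1, u2])

disagreement = utilities(x=0.5, s=0.0)   # status-quo / disagreement point

def product_of_gains(x, s):
    gains = utilities(x, s) - disagreement    # gains-from-trade per party
    return np.prod(np.clip(gains, 0.0, None))  # negative gains treated as infeasible

# Any deal that helps party 1 hurts party 2 (and vice versa), so at least one
# of the pair has zero gain and the product over all three parties is zero:
print(product_of_gains(x=0.7, s=1.0))   # 0.0 (party 2's gain is clipped to 0)
print(product_of_gains(x=0.5, s=1.0))   # 0.0 (parties 1 and 2 both gain exactly 0)

# Dropping the conflicting pair, the product over the remaining party is
# positive, even though the pair itself is unaffected (a Pareto comparison):
print(np.prod(utilities(x=0.5, s=1.0)[:1] - disagreement[:1]))  # 1.0
```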
When I try to understand the position you're speaking from, I suppose you're imagining a world where an agent's true preferences are always and only represented by their current introspectively accessible probability+utility,[1] whereas I'm imagining a world where "value uncertainty" is really meaningful (there can be a difference between the probability+utility we can articulate and our true probability+utility).
If 50% rainbows and 50% puppies is indeed the best representation of our preferences, then I agree: maximize rainbows.
If 50% rainbows and 50% puppies is instead a representation of our credences about our unknown true values, my argument is as follows: the best thing for us would be to maximize our true values (whichever of the two this is). If we assume value learning works well, then Geometric UDT is a good approximation of that best option.
Here "introspectively accessible" really means: what we can understand well enough to directly build into a machine.
This reminds me of Ramana’s question about what “enforces” normativity. The question immediately brought me back to a Peter Railton introductory lecture I saw (though I may be misremembering / misunderstanding / misquoting, it was a long time ago). He was saying that real normativity is not like the old Windows solitaire game, where if you try to move a card on top of another card illegally it will just prevent you, snapping the card back to where it was before. Systems like that plausibly have no normativity to them, because you simply have to follow the rules. In a way the whole point of normativity is that it is not enforced; if it were, it wouldn’t be normative.
I'm reminded of trembling-hand equilibria. Nash equilibria don't have to be self-enforcing; there can be tied-expectation actions which nonetheless simply aren't taken, so that agents could rationally move away from the equilibrium. Trembling-hand captures the idea that all actions have to have some probability (but some might be vanishingly small). Think of it as a very shallow model of where norm-violations come from: they're just random!
Evolutionarily stable strategies are perhaps an even better model of this, with self-enforcement being baked into the notion of equilibrium: stable strategies are those which cannot be invaded by alternate strategies.
Neither of these capture the case where the norms are frequently violated, however.
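For concreteness, here is a minimal sketch of the ESS condition applied to a toy "norm-following vs norm-violating" game (the payoff matrix is an assumption I made up purely for illustration):

```python
import numpy as np

# Strategy 0 = follow the norm (N), strategy 1 = violate it (D).
# Rows are the focal player's strategy; these payoffs are invented.
payoff = np.array([
    [3.0, 1.0],   # N vs N, N vs D
    [2.0, 0.0],   # D vs N, D vs D
])

def is_ess(i, payoff):
    """Maynard Smith's condition: i is an ESS if, for every mutant j != i,
    either E(i,i) > E(j,i), or E(i,i) == E(j,i) and E(i,j) > E(j,j)."""
    for j in range(len(payoff)):
        if j == i:
            continue
        if payoff[i, i] > payoff[j, i]:
            continue
        if payoff[i, i] == payoff[j, i] and payoff[i, j] > payoff[j, j]:
            continue
        return False
    return True

print(is_ess(0, payoff))  # True: norm-followers cannot be invaded by violators
print(is_ess(1, payoff))  # False: a population of violators can be invaded
```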
My notion of a function “for itself” is supposed to be that the functional mechanism somehow benefits the thing of which it’s a part. (Of course hammers can benefit carpenters, but we don’t tend to think of the hammer as a part of the carpenter, only a tool the carpenter uses. But I must confess that where that line is I don’t know, given complications like the “extended mind” hypothesis.)
Putting this in utility-theoretic terminology, you are saying that "for itself" telos places positive expectation on its own functional mechanism, or, perhaps stronger, uses significant bits of its decision-making power on self-preservation.
A representation theorem along these lines might reveal conditions under which such structures are usefully seen as possessing beliefs: a part of the self-preserving structure whose telos is map-territory correspondence.
Steve
As you know, I totally agree that mental content is normative - this was a hard lesson for philosophers to swallow, or at least the ones that tried to “naturalize” mental content (make it a physical fact) by turning to causal correlations. Causal correlation was a natural place to start, but the problem with it is that intuitively mental content can misrepresent - my brain can represent Santa Claus even though (sorry) it can’t have any causal relation with Santa. (I don’t mean my brain can represent ideas or concepts or stories or pictures of Santa - I mean it can represent Santa.)
Ramana
Misrepresentation implies normativity, yep.
My current understanding of what's going on here:
* There's a cluster of naive theories of mental content, EG the signaling games, which attempt to account for meaning in a very naturalistic way, but fail to account properly for misrepresentation. I think some of these theories cannot handle misrepresentation at all, EG, Mark of the Mental (a book about Teleosemantics) discusses how the information-theory notion of "information" has no concept of misinformation (a signal is not true or false, in information theory; it is just data, just bits). Similarly, signaling games have no way to distinguish truthfulness from a lie that's been uncovered: the meaning of a signal is what's probabilistically inferred from it, so there's no difference between a lie that the listener understands to be a lie & a true statement (see the sketch after this list). So both signaling games and information theory are in the mistaken "mental content is not normative" cluster under discussion here.
* Santa is an example of misrepresentation here. I see two dimensions of misrepresentation so far:
* Misrepresenting facts (asserting something untrue) vs misrepresenting referents (talking about something that doesn't exist, like Santa). These phenomena seem very close, but we might want to treat claims about non-existent things as meaningless rather than false, in which case we need to distinguish these cases.
* Simple misrepresentation (falsehood or nonexistence) vs deliberate misrepresentation (lie or fabrication).
* "Misrepresentation implies normativity" is saying that to model misrepresentation, we need to include a normative dimension. It isn't yet clear what that normative dimension is supposed to be. It could be active, deliberate maintenance of the signaling-game equilibrium. It could be a notion of context-independent normativity, EG the degree to which a rational observer would explain the object in a telic way ("see, these are supposed to fit together..."). Etc.
* The teleosemantic answer is typically one where the normativity can be inherited transitively (the hammer is for hitting nails because humans made it for that), and ultimately grounds out in the naturally-arising proto-telos of evolution by natural selection (human telic nature was put there by evolution). Ramana and Steve find this unsatisfying due to swamp-man examples.
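Here is the sketch promised above: a toy Lewis signaling game in which the "meaning" of a signal is just the listener's posterior over states, computed from the sender's known strategy. The states, signals, and probabilities are my own illustrative choices. The point is that a lie the listener has already priced in carries exactly the same information as an honest signal, so the formalism leaves no room for misrepresentation.

```python
import numpy as np

# Two states, two signals, uniform prior over states.
prior = np.array([0.5, 0.5])                 # P(state)

def posterior(sender, signal):
    """P(state | signal) under a known sender strategy P(signal | state)."""
    joint = prior * sender[:, signal]
    return joint / joint.sum()

# Honest sender: signal 0 in state 0, signal 1 in state 1.
honest = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

# "Liar" sender who always swaps the signals, and whose swapping the listener
# knows about. The listener simply decodes the swap.
known_liar = np.array([[0.0, 1.0],
                       [1.0, 0.0]])

print(posterior(honest, signal=0))       # [1. 0.]  "signal 0 means state 0"
print(posterior(known_liar, signal=1))   # [1. 0.]  the uncovered lie carries
                                         # exactly the same information
```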
Wearing my AI safety hat, I'm not sure we need to cover swamp-man examples. Such examples are inherently improbable. In some sense the right thing to do in such cases is to infer that you're in a philosophical hypothetical, which grounds out Swamp Man's telos in that of the philosophers doing the imagining (and so, ultimately, in evolution).
Nonetheless, I also dislike the choice to bottom everything out in biological evolution. It is not as if we have a theorem proving that all agency has to come from biological evolution. If we did, that would be very interesting, but biological evolution has a lot of "happenstance" around the structure of DNA and the genetic code. Can we say anything more fundamental about how telos arises?
I think I don't believe in a non-contextual notion of telos like Ramana seems to want. A hammer is not a doorstop. There should be little we can say about the physical makeup of a telic entity due to multiple-instantiability. The symbols chosen in a language have very weak ties to their meanings. A logic gate can be made of a variety of components. An algorithm can be implemented as a program in many ways. A problem can be solved by a variety of algorithms.
However, I do believe there may be a useful representation theorem, which says that if it is useful to regard something as telic, then we can regard it as having beliefs (in a way that should shed light on interpretability).
Let's look at a specific example: the Allais paradox. (See page 9 of the TDT paper (page 12 of the pdf) for the treatment I'm referencing.)
It is not plausible to me that the commonly-labeled-irrational behavior in the Allais paradox arises from a desire to be money-pumped. It seems more plausible, for example, that it arises from a cognitive heuristic which makes decisions by identifying the most relevant dimensions along which options differ, weighing how significant the various differences feel, and combining those results to make a decision. Moving from 100% probability to 33/34 probability feels significant because we are moving from certainty to uncertainty; compared to that, the difference in total payout feels insignificant. In contrast, moving from 34/100 to 33/100 feels insignificant compared to the difference in payouts.
Of course, this is still consistent with a biases-as-values analysis. EG, we can treat the heuristic weights I mention as values rather than mere heuristics. Or, reaching for a different explanation, we can say that we don't want to feel like a fool in the case that we choose 33/34 and lose, when we could have had certainty. Probabilities are subjective, so no matter how much we're assured 33/34 is the true probability, we can imagine a friend with a different evaluation of the odds who finds our decision foolish. Humans evolved to avoid such criticism. A statement of 100% probability is, in some sense, taken more seriously than a statement of near-100% probability. In that case, if we lose anyway, we can blame the person who told us it was 100%, so we are absolved from any potential feeling of embarrassment. In the 33/100 vs 34/100 version, there is no such effect.
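For reference, a minimal check of why the common choice pattern can't be rationalized by any expected-utility assignment (I'm using the dollar amounts and odds as I recall them from the TDT paper's treatment; treat the exact numbers as illustrative):

```python
from fractions import Fraction

# The Allais setup, roughly:
#   1A: $24,000 with certainty        1B: 33/34 chance of $27,000
#   2A: 34% chance of $24,000         2B: 33% chance of $27,000
# Gamble 2 is just gamble 1 with every probability scaled by 0.34, so no
# assignment of utilities u(24k), u(27k) can strictly prefer 1A over 1B
# *and* 2B over 2A. (Normalizing u($0) = 0; the contradiction goes through
# without this, it just simplifies the inequalities.)

def prefers_1A(u24, u27):
    return u24 > Fraction(33, 34) * u27

def prefers_2B(u24, u27):
    return Fraction(33, 100) * u27 > Fraction(34, 100) * u24

# Scan some utility ratios to see that the two preferences never co-occur:
for ratio in [Fraction(n, 100) for n in range(50, 151)]:
    u24, u27 = ratio, Fraction(1)
    assert not (prefers_1A(u24, u27) and prefers_2B(u24, u27))
print("no utility ratio rationalizes the common 1A-and-2B pattern")
```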
I want to say something like "the optimal resource-maximizing policy is an illusion" though. Like, it is privileging some sort of reference frame. In economics, profit maximization privileges the wellbeing of the shareholders. A more holistic view would treat all parties involved as stakeholders (employees, owners, customers, and even local communities where the company operates) and treat corporate policy as a bargaining problem between those stakeholders. This would better reflect long-term viability of strategies. (Naively profit-maximizing behavior has a tendency to create high turnover in employees, drive away customers, and turn local communities against the company.)
So yes, you can view everything as values, but I would include "resource-maximizing" in that as well.
A further question: what's at stake when you classify something as 'values'?
EG, in the Allais paradox, one thing that's at stake is whether the 'irrational' person should change their answer to be rational.
The name was by analogy to TEDx, yes. MIRI was running official MIRI workshops and we (Scott Garrabrant, me, and a few others) wanted to run similar events independently. We initially called them "mini miri workshops" or something like that, and MIRI got in touch to ask us not to call them that since it insinuates that MIRI is running them. They suggested "MIRIx" instead.
I'm trying to understand the second clause for conditional histories better.
The first clause is very intuitive, and in some sense, exactly what I would expect. I understand it as basically saying that h(X|E) drops elements from h(X) which can be inferred from E. Makes a kind of sense!
However, if that were the end of the story, then conditional histories would obviously be the wrong tool for defining conditional orthogonality. Conditional orthogonality is supposed to tell us about conditional independence in the probability distribution. However, we know from causal graphs that conditioning can create dependence. EG, in the Bayes net A → B ← C, A and C are independent, but if we condition on B, A and C become dependent. Therefore, conditional histories need to grow somehow. The second clause in the definition can be seen as artificially adding things to the history in order to represent that A and C have lost their independence.
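A brute-force sanity check of the collider claim, with arbitrary toy probabilities of my own choosing:

```python
import itertools
from collections import defaultdict

# Collider A -> B <- C, with B a noisy xor of A and C.
pA = {0: 0.5, 1: 0.5}
pC = {0: 0.7, 1: 0.3}
def pB_given(a, c):
    return {0: 0.9, 1: 0.1} if (a ^ c) == 0 else {0: 0.2, 1: 0.8}

joint = defaultdict(float)
for a, c, b in itertools.product([0, 1], repeat=3):
    joint[(a, b, c)] = pA[a] * pC[c] * pB_given(a, c)[b]

def marginal(keep):
    out = defaultdict(float)
    for (a, b, c), p in joint.items():
        out[tuple({'A': a, 'B': b, 'C': c}[v] for v in keep)] += p
    return out

# Marginally, A and C are independent:
pAC, pA_, pC_ = marginal('AC'), marginal('A'), marginal('C')
print(all(abs(pAC[(a, c)] - pA_[(a,)] * pC_[(c,)]) < 1e-12
          for a in [0, 1] for c in [0, 1]))                    # True

# Conditioned on B, they are not: check P(a,b,c)P(b) == P(a,b)P(b,c).
pAB, pBC, pB_ = marginal('AB'), marginal('BC'), marginal('B')
print(all(abs(joint[(a, b, c)] * pB_[(b,)] - pAB[(a, b)] * pBC[(b, c)]) < 1e-12
          for a in [0, 1] for b in [0, 1] for c in [0, 1]))    # False
```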
What I don't yet see is how to relate these phenomena in detail. I find it surprising that the second clause only depends on E, not on X. It seems important to note that we are not simply adding the history of E[1] into the answer. Instead, it asks that the history of E itself *factors* into the part within h(X|E) and the part outside. If E and X are independent, then only the first clause comes into play. So the implications of the second clause do depend on X, even though the clause doesn't mention X.
So, is there a nice way to see how the second clause adds an "artificial history" to capture the new dependencies which X might gain when we condition on E?
In this paragraph, I am conflating the set E with the corresponding partition {E, S\E}.
This mostly made sense to me. I agree that it is a tricky question with a lot of moving pieces. In a typical RL setting, low entropy does imply low learning, as observed by Cui et al. One reason for this is that exploration is equated with randomness. RL typically works with point-estimates only, so the learning system does not track multiple hypotheses to test between. This prevents deterministic exploration strategies like VoI exploration, which explore based on the potential for gaining information rather than just randomly.
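As a minimal sketch of the contrast (a toy Bernoulli bandit I made up, not anything from Cui et al.): an agent that tracks a posterior per arm can explore deterministically by targeting its own uncertainty, whereas a point-estimate agent has to inject randomness to explore.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = [0.6, 0.4]   # unknown arm success probabilities (toy values)

def epsilon_greedy(point_estimates, eps=0.1):
    # Point estimates only; exploration happens by injecting randomness.
    # Shown only for contrast; not used in the loop below.
    if rng.random() < eps:
        return int(rng.integers(len(point_estimates)))
    return int(np.argmax(point_estimates))

def variance_seeking(alpha, beta):
    # Tracks a Beta posterior per arm; explores the most uncertain arm
    # deterministically (a crude stand-in for value-of-information).
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return int(np.argmax(var))

alpha, beta = np.ones(2), np.ones(2)
for t in range(100):
    arm = variance_seeking(alpha, beta)
    reward = rng.random() < true_p[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
print(alpha / (alpha + beta))   # posterior means after deterministic exploration
```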
My main point here is just to point out all the extra assumptions which are needed to make a strict connection between entropy and adaptability; this makes the observed connection empirical-only, IE not a connection which holds in all the corner cases we can come up with.
However, I may be a bit more prone to think of humans as exploring intelligently than you are, IE, forming hypotheses and taking actions which test them, rather than just exploring by acting randomly.
I also don't buy this part:
> And the last piece, entropy being subjective, would be just the point of therapy and some of the interventions described in the other recent RLHF+ papers.
My concern isn't that you're anthropomorphizing the LLM, but rather that you may be anthropomorphizing it incorrectly. The learned policy may have close to zero entropy, but that doesn't mean that the LLM can predict its own actions perfectly ahead of time from its own subjective perspective. Meaning, the connection I'm drawing between adaptability and entropy is a distinct phenomenon from the one noted by Cui, since the notions of entropy are different (mine being a subjective notion based on the perspective of the agent, and Cui's being a somewhat more objective one based on the randomization used to sample behaviors from the LLM).
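A tiny numeric illustration of the two notions of entropy I mean (all numbers invented; nothing here is measured from an actual LLM):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# 1. Sampling entropy of the learned policy: it acts by argmax, so the
#    distribution actually used to emit actions is deterministic.
policy_logits = np.array([4.0, 1.0, 0.5])
action_dist = np.zeros(3)
action_dist[np.argmax(policy_logits)] = 1.0
print(entropy(action_dist))        # 0.0 bits: the "objective" entropy is zero

# 2. Subjective entropy: the agent's own prediction of what it will do in a
#    situation it hasn't resolved yet, which can remain quite uncertain even
#    though the eventual action will be emitted deterministically.
self_prediction = np.array([0.5, 0.3, 0.2])
print(entropy(self_prediction))    # ~1.49 bits: the agent can't predict itself
```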
(Note: your link for the paper by Cui et al currently points back to this post, instead.)
Perhaps I'm still not understanding you, but here is my current interpretation of what you are saying:
I see this line of reasoning as insisting on taking max-expected-utility according to your explicit model of your values (including your value uncertainty), even when you have an option which you can prove is higher expected utility according to your true values (whatever they are).
My argument has a somewhat frequentist flavor: I'm postulating true values (similar to postulating a true population frequency), and then looking for guarantees with respect to them (somewhat similar to looking for an unbiased estimator). Perhaps that is why you're finding it so counter-intuitive?
The crux of the issue seems to be whether we should always maximize our explicit estimate of expected utility, vs taking actions which we know are better with respect to our true values despite not knowing which values those are. One way to justify the latter would be via Knightian value uncertainty (IE infra-Bayesian value uncertainty), although that hasn't been the argument I've been trying to make. I'm wondering if a more thoroughly geometric-rationality perspective would provide another sort of justification.
But the argument I'm trying to make here is closer to just: but you know Geometric UDT is better according to your true values, whatever they are!
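To make the contrast concrete, here is a toy payoff table of my own construction (the numbers, the "learning cost", and the particular policies are illustrative assumptions, not anything from our discussion):

```python
import numpy as np

# Two value hypotheses, each with 50% credence; rows are candidate policies;
# entries are the utility that policy achieves if that hypothesis holds your
# true values.
policies = {
    "maximize rainbows":            np.array([110.0,   0.0]),
    "50/50 split of resources":     np.array([ 55.0,  50.0]),
    "learn values, then maximize":  np.array([104.5,  95.0]),  # small learning cost
}
credence = np.array([0.5, 0.5])   # P(rainbow values), P(puppy values)

for name, payoff in policies.items():
    arithmetic = float(credence @ payoff)            # explicit expected utility
    geometric = float(np.prod(payoff ** credence))   # geometric (Nash-style) score
    print(f"{name:30s}  E[U] = {arithmetic:6.2f}   geo = {geometric:6.2f}")
```

On this table, arithmetic expectation already prefers the learning policy when it is on the menu, but among the non-learning policies it picks "maximize rainbows", which the geometric score rejects outright because it zeroes out the puppy branch.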
== earlier draft reply for more context on my thinking ==
Perhaps I'm just not understanding your argument here, and you need to spell it out in more detail? My current interpretation is that you are interpreting "care about both worlds equally" as "care about rainbows and puppies equally" rather than "if I care about rainbows, then I equally want more rainbows in the (real) rainbow-world and the (counterfactual) puppy-world; if I care about puppies, then I equally want more puppies in the (real) puppy-world and the (counterfactual) rainbow-world."
A value hypothesis is a nosy neighbor if[1] it wants the same things for you whether it is your true values or not. So what's being asserted here (your "third if" as I'm understanding it) is that we are confident we've got that kind of relationship with ourselves -- we don't want "our values to be satisfied, whatever they are" -- rather, whatever our values are, we want them to be satisfied across universes, even in counterfactual universes where we have different values.
Maximizing rainbows maximizes the expected value given our value uncertainty, but it is a catastrophe in the case that we are indeed puppy-loving. Moreover, it is an avoidable catastrophe; ...
... and now I think I see your point?
The idea that it is valuable for us to get the ASI to entangle its values with ours relies on an assumption of non-nosyness.
There is a different way to justify this assumption,
(but not "only if"; there are other ways to be a nosy neighbor)