michaelcohen


Response to "What does the universal prior actually look like?"

I assume (6) means that your "anthropic update" scans across possible universes to find those that contain important decisions you might want to influence?

Yes, and then outputs strings from that set with probability proportional to their weight in the universal prior.

By (3) do you mean the same thing as "Simplest output channel that is controllable by advanced civilization with modest resources"?

I would say "successfully controlled" instead of "controllable", although that may be what you meant by the term. (I decomposed this as controllable + making good guesses.) For some definitions of controllable, I might have given a point estimate of maybe 1 or 5 bits. But there has to be an output channel for which the way a bitstring gets transmitted out is the way the evolved consequentialists expect. That said, recasting it in these terms implicitly suggests that the specification of the output channel can take on some of the character of (6'), which makes me want to lower my range to 15-60 bits, with a point estimate of 25.

instead of using (1)+(2)+(3) you should compare to (6') = "Simplest program that scans across many possible worlds to find those that contain some pattern that can be engineered by consequentialists trying to influence prior."

Similarly, I would replace "can be" with "seems to have been". And just to make sure we're talking about the same thing, it takes this list of patterns, and outputs them with probability proportional to their weight in the universal prior.

Yeah, this seems like it would make some significant savings compared to (1)+(2)+(3). I think replacing parts of the story from being specified as [arising from natural world dynamics] to being specified as [picked out "deliberately" by a program] generally leads to savings.

Then the comparison is between specifying "important predictor to influence" and whatever the easiest-to-specify pattern that can be engineered by a consequentialist. It feels extremely likely to me that the second category is easier, indeed it's kind of hard for me to see any version of (6) that doesn't have an obviously simpler analog that could be engineered by a sophisticated civilization.

I don't quite understand the sense in which [worlds with consequentialist beacons/geoglyphs] can be described as [easiest-to-specify controllable pattern]. (And if you accept the change of "can be" to "seems to have been", it propagates here). Scanning for important predictors to influence does feel very similar to me to scanning for consequentialist beacons, especially since the important worlds are plausibly the ones with consequentialists.

There's a bit more work to be done in (6') besides just scanning for consequentialist beacons. Since the output channel is selected "conveniently" for the consequentialists (the program is looking for the beacons, rather than the consequentialists making their best guess(es) and putting up a bunch of beacons accordingly), there has to be some part of the program which aggregates the information of multiple beacons (e.g. by searching for coherence), or else determines which beacon takes precedence, and which then also determines how to interpret their physical signature as a bitstring.

Tangent: heading down a path just now trying to compare [scan for "important to influence"] vs. [scan for "consequentialist attempted output messages"], my first attempt had an error, so I'll point it out. It's not necessarily harder to specify "scan for X" than "scan for Y" when X is a subset of Y. For instance, "scan for primes" is probably simpler than "scan for numbers with fewer than 6 factors".
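To make the subset point concrete (a toy Python sketch of my own, not anything load-bearing): the primes are a strict subset of the numbers with fewer than 6 factors, yet the primality predicate is arguably the shorter program to write down.

```python
def is_prime(n):
    # "Scan for primes": a short predicate to specify.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def count_factors(n):
    # Number of divisors of n, including 1 and n.
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def has_fewer_than_6_factors(n):
    # "Scan for numbers with fewer than 6 factors": the fussier predicate.
    return count_factors(n) < 6

primes = [n for n in range(2, 50) if is_prime(n)]
few_factors = [n for n in range(2, 50) if has_fewer_than_6_factors(n)]

# Primes have exactly 2 factors, so they form a strict subset of few_factors,
# even though is_prime is the simpler predicate.
assert all(n in few_factors for n in primes)
assert len(primes) < len(few_factors)
```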

Maybe clarifying or recasting the language around "easiest-to-specify controllable pattern" will clear this up, but can you explain more why it feels to you that [scan for "consequentialists' attempted output messages"] is so much simpler than [scan for "important-to-influence data streams"]? My very preliminary first take is that they are within 8-15 bits.

I also don't really see why you are splitting them [(4) + (5)] apart, shouldn't we just combine them into "wants to influence predictors"? If you're doing that presumably you'd both use the anthropic prior and then the treacherous turn.

I split them in part in case there is a contingent of consequentialists who believes that outputting the right bitstring is key to their continued existence, believing that they stop being simulated if they output the wrong bit. I haven't responded to your claim that this would be faulty metaphysics on their part; it still seems fairly tangential to our main discussion. But you can interpret my 5 bit point estimate for (5) as claiming that 31 times out of 32 that a civilization of consequentialists tries to influence their world's output, it is in an attempt to survive. Tell me if you're interested in a longer justification that responds to your original "line by line comments" comment.

Response to "What does the universal prior actually look like?"

Yeah, seems about right.

With (4), I've been assuming for the sake of argument that manipulators get free access to the right prior. I don't have a strong stance on the question, but it's not complicated for a directly programmed anthropic update to be built on that right prior too.

I guess I can give some estimates for how many bits I think are required for each of the rows in the table. I'll give a point estimate, and a range for a 50% confidence interval for what my point estimate would be if I thought about it for an hour by myself and had to write up my thinking along the way.

I don't have a good sense for how many bits it takes to get past things that are just extremely basic, like an empty string, or an infinite string of 0s. But whatever that number is, add it to both 1 and 6.

1) Consequentialists emerge: 10-50 bits; point estimate 18
2) TM output has not yet begun: 10-30 bits; point estimate 18
3) Make good guesses about controllable output: 18-150 bits; point estimate 40
4) Decide to output anthropically updated prior: 8-35 bits; point estimate 15
5) Decide to do a treacherous turn: 1-12 bits; point estimate 5

vs. 6) Direct program for anthropic update: 18-100 bits; point estimate 30

The ranges are fairly correlated.
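As a quick sanity check on what those point estimates imply (just arithmetic on the numbers above): the consequentialist route totals 96 bits against 30 for the direct program, and each bit of difference halves the relative weight in the universal prior.

```python
# Point estimates from the table above, in bits.
consequentialist_route = {
    "consequentialists emerge": 18,
    "TM output has not yet begun": 18,
    "good guesses about controllable output": 40,
    "output anthropically updated prior": 15,
    "treacherous turn": 5,
}
direct_route = 30  # direct program for anthropic update

total = sum(consequentialist_route.values())  # 96 bits
gap = total - direct_route                    # 66 bits
relative_weight = 2.0 ** -gap                 # ~1.4e-20 relative weight for the consequentialist route
```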

Response to "What does the universal prior actually look like?"

Do you have some candidate "directly programmed anthropic update" in mind? (That said, my original claim was just about the universal prior, not about a modified version with an anthropic update)

I’m talking about the weight of an anthropically updated prior within the universal prior. I should have added “+ bits to encode anthropic update directly” to that side of the equation. That is, it takes some number of bits to encode “the universal prior, but conditioned on the strings being important to decision-makers in important worlds”. I don’t know how to encode this, but there is presumably a relatively simple direct encoding, since it’s a relatively simple concept. This is what I was talking about in my response to the section “The competition”.

One way of thinking about the bits saved from the anthropic update that might be helpful is that they come from conditioning on the string being important to decision-makers in important worlds. I think this gives us a handle for reasoning about the anthropic savings as a self-contained object, even if it’s a big number.

> bits to specify camera on earth - bits saved from anthropic update

I think the relevant number is just "log_2 of the number of predictions that the manipulators want to influence." It seems tricky to think about this (rather small) number as the difference between two (giant) numbers.

But suppose they picked only one string to try to manipulate. The cost would go way down, but then it probably wouldn’t be us that they hit. If the log of the number of predictions that the manipulators want to influence is 7 bits smaller than [bits to specify camera on earth - bits saved from anthropic update], then there’s a 99% chance we’re okay. If different manipulators in different worlds are choosing differently, we can expect 1% of them to choose our world, and so we start worrying again, but we add the 7 bits back because it’s only 1% of them.
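The 7-bit / 99% arithmetic here, spelled out (a trivial sketch):

```python
# If the manipulators' mixture is spread 7 bits too thin to single out our
# stream, only a 2^-7 fraction of it lands on us.
margin_bits = 7
p_hit = 2.0 ** -margin_bits   # 1/128, a bit under 1%
p_safe = 1 - p_hit            # ~0.992: the "99% chance we're okay"
```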

So let’s consider two Turing machines. Each row will have a cost in bits.

A:
- Consequentialists emerge,
- make good guesses about controllable output,
- decide to output anthropically updated prior,
- weight of earth-camera within anthropically updated prior.

B:
- Directly programmed anthropic update,
- weight of earth-camera within anthropically updated prior.

The last point can be decomposed into [description length of camera in our world - anthropic savings], but it doesn’t matter; it appears in both options.

I don’t think this is what you have in mind, but I’ll add another case, in case this is what you meant by “They are just looking at the earth-like Turing machine”. Maybe just skip this, though.

A:
- Consequentialists emerge in a world like ours,
- make good guesses about controllable output,
- output (strong) anthropically updated prior,
- weight of earth-camera within the strong anthropic update.

B:
- Directly programmed anthropic update,
- weight of earth-camera within the normal anthropic update.

They can make a stronger anthropic update by using information about their world, but the savings will be equal to the extra cost of specifying that the consequentialists are in a world like ours. This is basically the case I mentioned above where different manipulators choose different sets of worlds to try to influence, but then the set of manipulators that choose our world has smaller weight.

------ end potential skip

What I think it boils down to is the question:

Is the anthropically updated version of the universal prior most simply described as “the universal prior, but conditioned on the strings being important to decision-makers in important worlds” or “that thing consequentialists sometimes output”? (And consequentialists themselves may be more simply described as “those things that often emerge”). “Sometimes” is of course doing a lot of work, and it will take bits to specify which “sometimes” we are talking about. If the latter is simpler, then we might expect the natural continuation of those sequences to usually contain treacherous turns, and if the former is simpler, then we wouldn’t. This is why I don’t think the weight of an earth-camera in the universal prior ever comes into it.

But/so I’m not sure whether I’m missing the point of a couple of paragraphs of your comment—the one which starts “They are just looking at the earth-like Turing machine”, and the next paragraph, which I agree with.

Finite Factored Sets

I'm using some of the terminology I suggested here.

A factoring is a set of questions such that each signature of possible answers identifies a unique element. In 20 questions, you can tailor the questions depending on the answers to previous questions, and ultimately each element will have a bitstring signature depending on the history of yesses and nos. I guess you can define the question to include xors with previous questions, so that it effectively changes depending on the answers to others. But it's sometimes useful that the bitstrings are allowed to have different length. It feels like an unfortunate fact that when constructing a factoring for 7 elements, you're forced to use the factoring {"Okay, well, which element is it?"}, just because you don't want to have to answer a different number of questions for different elements. Is this a real cost? Or do we only ever construct cases where it's not?

In the directed graph of subsets, with edges corresponding to the subset relation, why not consider arbitrary subtrees? For example, for the set of 7 elements, we might have {{0, 1, 2}, {{3, 4}, {5, 6}}}. (I'm not writing it out as a tree, but that contains all the information). This corresponds to the sequence of questions: "is it less than 3?", [if yes] "is it 0, 1, or 2?", [if no], "is it less than 5?", "is it even?" Allowing different numbers of questions and different numbers of answers seems to give some extra power here. Is it meaningful?
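The question tree above can be written out as code (a toy sketch; the answer encodings are my own). Signatures come out with different lengths, yet each still identifies a unique element:

```python
def signature(x):
    # The tree from the text, over elements 0..6:
    #   "is it less than 3?"
    #     yes -> "is it 0, 1, or 2?" (a three-way question)
    #     no  -> "is it less than 5?", then "is it even?"
    if x < 3:
        return ("yes", x)
    return ("no", x < 5, x % 2 == 0)

sigs = [signature(x) for x in range(7)]
# Two-answer and three-answer signatures coexist, but all 7 are distinct.
assert len(set(sigs)) == 7
assert {len(s) for s in sigs} == {2, 3}
```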

Finite Factored Sets

I was thinking of some terminology that might make it easier to think about factorings and histories and whatnot.

A partition can be thought of as a (multiple-choice) question. Like for a set of words, you could have the partition corresponding to the question "Which letter does the word start with?" and then the partition groups together elements with the same answer.

Then a factoring is a set of questions, where the set of answers will uniquely identify an element. The word that comes to mind for me is "signature", where an element's signature is the set of answers to the given set of questions.

For the history of a partition X, X can be thought of as a question, and the history is the subset of questions in the factoring that you need the answers to in order to determine the answer to question X.

And then two questions X and Y are orthogonal if there aren't any questions in the factoring that you need the answer to both for answering X and for answering Y.
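A small sketch of this terminology (my own toy encoding: a question is a function from elements to answers, and a signature is the tuple of answers):

```python
def is_factoring(questions, elements):
    # A set of questions is a factoring iff each element's signature
    # (its tuple of answers) is unique.
    sigs = [tuple(q(x) for q in questions) for x in elements]
    return len(set(sigs)) == len(sigs)

elements = [0, 1, 2, 3]
is_odd = lambda x: x % 2 == 1   # "is it odd?"
is_high = lambda x: x >= 2      # "is it 2 or 3?"

# Together the two questions identify each element; either alone does not.
assert is_factoring([is_odd, is_high], elements)
assert not is_factoring([is_odd], elements)
```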

Finite Factored Sets

I was thinking about the difficulty finite factored sets have with the uniform distribution over 4 elements, and it makes me feel like something fundamental needs to be recast. An analogy came to mind about eigenvectors vs. eigenspaces.

What we might like to be true about the unit eigenvectors of a matrix is that they are the unique unit vectors for which the linear transformation preserves direction. But if two eigenvectors have the same eigenvalue, the choice of eigenvectors is not unique--we could choose any pair on that plane. So really, it seems like we shouldn't think about a matrix's eigenvectors and (potentially repeated) eigenvalues; we should think about a matrix's eigenvalues and eigenspaces, some of which might be more than 1-dimensional.
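The non-uniqueness is easy to see in a toy example (a pure-Python sketch of my own): with diag(2, 2, 5), every vector in the x-y plane is an eigenvector for the repeated eigenvalue 2, so only the 2-dimensional eigenspace is canonical.

```python
def matvec(A, v):
    # Matrix-vector product for a square matrix given as nested lists.
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Eigenvalue 2 is repeated; eigenvalue 5 is simple.
A = [[2, 0, 0],
     [0, 2, 0],
     [0, 0, 5]]

# Any vector in span{e1, e2} is an eigenvector with eigenvalue 2,
# so no particular pair of eigenvectors there is distinguished.
for v in ([1, 0, 0], [0, 1, 0], [3, 4, 0]):
    assert matvec(A, v) == [2 * x for x in v]

# The eigenvalue-5 eigenspace is 1-dimensional, so its eigenvector is
# unique up to scale.
assert matvec(A, [0, 0, 1]) == [0, 0, 5]
```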

I wonder if there's a similar move to be made when defining orthogonality. Maybe (for example) orthogonality would be more conveniently defined between two sets of partitions instead of between two partitions. Probably that specific idea fails, but maybe there's something like this that could be done.

Response to "What does the universal prior actually look like?"

I take your point that we are discussing some output rules which add extra computation states, and so some output rules will add fewer computation states than others.

I'm merging my response to the rest with my comment here.

Response to "What does the universal prior actually look like?"

They are using their highest probability guess about the output channel, which will be higher probability than the output channel exactly matching some camera on old earth (but may still be very low probability). I still don't understand the relevance.

I’m trying to find the simplest setting where we have a disagreement. We don’t need to think about cameras on earth quite yet. I understand the relevance isn’t immediate.

They don't care about "their" Turing machine, indeed they live in an infinite number of Turing machines that (among other things) output bits in different ways.

I think I see the distinction between the frameworks in which we most naturally think about the situation. I agree that they live in an infinite number of Turing machines, in the sense that their conscious patterns appear in many different Turing machines. All of these Turing machines have weight in some prior. When they change their behavior, they (potentially) change the outputs of any of these Turing machines. Taking these Turing machines as a set, weighted by those prior weights, we can consider the probability that the output obeys a predicate P.

The answer to this question can be arrived at through an equivalent process. Let the inhabitants imagine that there is a correct answer to the question “which Turing machine do I really live in?” They then reason anthropically about which Turing machines give rise to such conscious experiences as theirs, using the same prior over Turing machines that I described above. And then they make the same calculation about the probability that “their” Turing machine outputs something that obeys the predicate P. So on the one hand, we could ask “what is the probability that the section of the universal prior which gives rise to these inhabitants produces an output that obeys predicate P?” Or we could equivalently ask “what probability does this inhabitant ascribe to ‘its’ Turing machine outputting a string that obeys predicate P?”

There are facts that I find much easier to incorporate when thinking in the latter framework, such as “a work tape inhabitant knows nothing about the behavior of its Turing machine’s output tape, except that it has relative simplicity given the world that it knows.” (If it believes that its conscious existence depends on its Turing machine never having output a bit that differs from a data stream in a base world, it will infer other things about its output tape, but you seem to disagree that it would make that assumption, and I’m fine to go along with that). (If the fact were much simpler—“a work tape inhabitant knows nothing about the behavior of its Turing machine’s output tape” full stop—I would feel fairly comfortable in either framework.)

If it is the case that, for any action that a work tape inhabitant takes, the following is unchanged: [the probability that it (anthropically) ascribes to “its” Turing machine printing an output that obeys predicate P after it takes that action], then, no matter its choice of action, the probability under the universal prior that the output obeys predicate P is also unchanged.

What if the work tape inhabitant only cares about the output when the universal prior is being used for important applications? Let Q be the predicate [P and “the sequence begins with a sequence which is indicative of an important application of the universal prior”]. The same logic that applies to P applies to Q. (It feels easier to talk about probabilities of predicates (expectations of Boolean functions) rather than expectations of general functions, but if we wanted to do importance weighting instead of using a strict predicate on importance, the logic is the same.)

Writing about the fact I described above about what the inhabitants believe about their Turing machine’s output has actually clarified my thinking a bit. Here’s a predicate where I think inhabitants could expect certain actions to make it more likely that their Turing machine output obeys that predicate. “The output contains the string [particular 1000 bit string]”. They believe that their world’s output is simple given their world’s dynamics, so if they write that 1000 bit string somewhere, it is more likely for the predicate to hold. (Simple manipulations of the string are nearly equally more likely to be output).

So there are severe restrictions on the precision with which they can control even low-probability changes to the output, but not total restrictions. So I wasn’t quite right in describing it as a max-entropy situation. But the one piece of information that distinguishes their situation from one of maximum uncertainty about the output is very slight. So I think it’s useful to try to think in terms of how they get from that information to their goal for the output tape.

I was describing the situation where I wanted to maximize the probability that the output of our world obeys the predicate: “this output causes decision-maker simulators to believe that virtue pays”. I think I could very slightly increase that probability by trying to reward virtuous people around me. Consider consequentialists who want to maximize the probability of the predicate “this output causes decision-maker simulators to run code that recreates us in their world”. They want to make the internals of their world such that there are simple relative descriptions for outputs for which that predicate holds. I guess I think that approach offers extremely limited and imprecise ability to deliberately influence the output, no matter how smart you are.

If an approach has very limited success probability, (i.e. very limited sway over the universal prior), they can focus all their effort on mimicking a few worlds, but then we’ll probably get lucky, and ours won’t be one of the ones they focus on.

From a separate recent comment,

But now that we've learned that physics is the game of life, we can make much better guesses about how to build a dataset so that a TM could output it. For example, we can:

  • Build the dataset at a large number of places.
  • [etc.]

...

I challenge you to find any plausible description of a rule that outputs the bits observed by a camera, for which I can't describe a simpler extraction rule that would output some set of bits controlled by the sophisticated civilization.

You're comparing the probability of one of these many controlled locations driving the output of the machine to the probability that a random camera on an earth-like Turing machine drives the output. Whereas it seems to me like the right question is to look at the absolute probability that one of these controlled locations drives the output. The reason is that what they attempt to output is a mixture over many sequences that a decision-maker-simulator might want to know about. But if the sequence we're feeding in is from a camera on earth, then their antics only matter to the extent that their mixture puts weight on a random camera on earth. So they have to specify the random camera on an earth-like Turing machine too. They're paying the same cost, minus any anthropic update. So the costs to compare are roughly [- log prob. of successful control of output + bits to specify camera on earth - bits saved from anthropic update] vs. [bits to specify camera on earth - bits saved from directly programmed anthropic update]. This framing seems to imply we can cross off [bits to specify camera on earth] from both sides.

Response to "What does the universal prior actually look like?"

Okay, now suppose they want the first N bits of the output of their Turing machine to obey predicate P, and they assign that a value of 100, and they assign a value of 0 to any N-bit string that does not obey predicate P. And they don't value anything else. If some actions have a higher value than other actions, what information about the output tape dynamics are they using, and how did they acquire it?

Response to "What does the universal prior actually look like?"

Just look at the prior--for any set of instructions for the work tape heads of the Turing machine, flipping the "write-1" instructions of the output tape with the "write-0" instructions gives an equally probable Turing machine.
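This symmetry can be checked in a toy model (my own toy prior, not the actual universal prior): take "programs" that print their own bits, weighted by 2^-length. Flipping write-0 with write-1 maps each program to its bitwise complement at the same weight, so the output bits carry no a priori bias.

```python
from itertools import product

# Toy prior: a "program" is a bitstring of length 1..3 that prints itself;
# program p gets weight 2^-len(p) (unnormalized).
programs = [p for n in range(1, 4) for p in product((0, 1), repeat=n)]
weight = lambda p: 2.0 ** -len(p)

# Swapping write-0 and write-1 instructions is the bitwise-complement map:
# a weight-preserving bijection on programs that complements the output.
complement = lambda p: tuple(1 - b for b in p)
assert all(weight(complement(p)) == weight(p) for p in programs)

# Hence the prior mass on "first output bit is 1" equals the mass on
# "first output bit is 0".
mass_one = sum(weight(p) for p in programs if p[0] == 1)
mass_zero = sum(weight(p) for p in programs if p[0] == 0)
assert mass_one == mass_zero
```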
