All of SamEisenstat's Comments + Replies

Q5 is true if (as you assumed) the space of lotteries is the space of distributions over a finite set. (For a general convex set, you can get long-line phenomena.)

First, without proof, I'll state the following generalization.

Theorem 1. Let ≺ be a relation on a convex space 𝓛 satisfying axioms A1, A2, A3, and the following additional continuity axiom. For all A, B, C ∈ 𝓛, the set

{p ∈ [0, 1] ∣ pA + (1 − p)B ≺ C}

is open in [0, 1]. Then, there exists a function u from 𝓛 to the long line such that A ≺ B i... (read more)

Nice, I like this proof also. Maybe there's a clearer way to say this, but your "unrolling one step" corresponds to my going from  to . We somehow need to "look two possible worlds deep".

Here's a simple Kripke frame proof of Payor's lemma.

Let (W, R, ⊨) be a Kripke frame over our language, i.e. W is a set of possible worlds, R ⊆ W × W is an accessibility relation, and ⊨ judges that a sentence holds in a world. Now, suppose for contradiction that W ⊨ □(□x → x) → x but that W ⊭ x, i.e. x does not hold in some world w ∈ W.

A bit of De Morganing tells us that the hypothesis on w is equivalent to ¬x → ◇(□x ∧ ¬x), so w ⊨ ◇(□x ∧ ¬x). So, there is some world v with wRv ... (read more)
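The truncated step can be completed by iterating the same move once more. A sketch of the standard two-step Kripke argument (my reconstruction, not necessarily Sam's exact wording):

```latex
\begin{align*}
w \models \Diamond(\Box x \land \lnot x)
  &\;\Rightarrow\; \exists v.\ wRv \text{ and } v \models \Box x \land \lnot x \\
  &\;\Rightarrow\; v \models \Diamond(\Box x \land \lnot x)
     \quad \text{(the same De Morgan step at $v$, since $v \not\models x$)} \\
  &\;\Rightarrow\; \exists u.\ vRu \text{ and } u \models \Box x \land \lnot x.
\end{align*}
```

But v ⊨ □x together with vRu forces u ⊨ x, contradicting u ⊨ ¬x. The two accessibility steps w R v R u are exactly the "two possible worlds deep" that the proof needs.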

Thanks! I also wrote up my proof in another comment, which should help triangulate intuition.

Theorem. Weak HCH (and similar proposals) contain EXP.

Proof sketch: I give a strategy that H can follow to determine whether some machine that runs in time T(n) accepts. Basically, we need to answer questions of the form "Does cell i have value v at time t?" and "Was the head in position i at time t?", where i and t are bounded by some function in exp(n). Using place-system representations of i and t, these questions have length in poly(n), so they can be asked. Further, each question is a simple function of a constant number of other... (read more)
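To make the recursive structure concrete, here is a toy version of the decomposition in ordinary code (my illustration, not part of the original proof; the machine encoding and function names are invented for the example). Every question about time t is answered from a constant number of questions about time t − 1, just as in H's strategy:

```python
from functools import lru_cache

# A machine is a transition table delta[state, symbol] = (new_state,
# written_symbol, move), with move in {-1, +1}.

def accepts(delta, start, accept, blank, x, T):
    @lru_cache(maxsize=None)
    def head(t):
        # "Where is the head, and in what state, at time t?"
        if t == 0:
            return start, 0
        state, pos = head(t - 1)
        if state == accept:          # halted: stay put once accepting
            return state, pos
        new_state, _, move = delta[state, cell(pos, t - 1)]
        return new_state, pos + move

    @lru_cache(maxsize=None)
    def cell(i, t):
        # "What symbol does cell i hold at time t?"
        if t == 0:
            return x[i] if 0 <= i < len(x) else blank
        state, pos = head(t - 1)
        if pos == i and state != accept:
            return delta[state, cell(i, t - 1)][1]
        return cell(i, t - 1)

    return any(head(t)[0] == accept for t in range(T + 1))
```

For a machine running in time T(n) = 2^poly(n), the indices i and t fit in poly(n) bits, so each individual question stays polynomially short even though there are exponentially many of them. (The Python version only works for small T; H instead answers each question separately by delegation.)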

Evan Hubinger (3y):
Yeah, I think that's absolutely right—I actually already have a version of my market making EXP proof for amplification that I've been working on cleaning up for publishing. But regardless of how you prove it, I certainly agree that I understated amplification here and that it can in fact get to EXP.

As you say, this isn't a proof, but it wouldn't be too surprising if this were consistent. There is some k such that □ₙϕ → ϕ has a proof of length O(n^k) by a result of Pavel Pudlák (On the length of proofs of finitistic consistency statements in first order theories). Here I'm making the dependence on n explicit, but not the dependence on ϕ. I haven't looked at it closely, but the proof strategy in Theorems 5.4 and 5.5 suggests that k will not depend on ϕ, as long as we only ask for the weaker property that □ₙϕ → ϕ will only be provable in length O(n^k) for sentenc

... (read more)
Diffractor (5y):
I found an improved version by Pavel, that gives a way to construct a proof of ϕ from □ₙϕ that has a length of O(n). The improved version is here.

There are restrictions to this result, though. One is that the C-rule must apply to the logic. This is just the ability to go from ∃x: ϕ(x) to instantiating a c such that ϕ(c). Pretty much all reasonable theorem provers have this. The second restriction is that the theory must be finitely axiomatizable. No axiom schemas allowed. Again, this isn't much of a restriction in practice, because NBG set theory, which proves the consistency of ZFC, is finitely axiomatizable.

The proof strategy is basically as follows. It's shown that the shortest proof of a statement with quantifier depth n must have a length of Ω(n²), if the maximum quantifier depth in the proof is 2n or greater. This can be flipped around to conclude that if there's a length-n proof of ϕ, the maximum quantifier depth in the proof can be at most O(√n).

The second part of the proof involves constructing a bounded-quantifier version of a truth predicate. By Tarski's undefinability of truth, a full truth predicate cannot be constructed, but it's possible to exhibit a formula for which it's provable that qd(⌜ψ⌝) ≤ n → (Satₙ(⌜ψ⌝, x) ↔ Σ(Satₙ, ⌜ψ⌝, x)) (Σ is the formula laying out Tarski's conditions for something to be a truth predicate). Also, if n ≥ the quantifier depth of ψ, there's a proof of Satₙ(⌜ψ⌝, x) ↔ ψ[x] (ψ[x] is the sentence ψ with its free variables substituted for the elements enumerated in the list x). Also, there's a proof that Satₙ is preserved under inference rules and logical axioms, as long as everything stays below a quantifier depth of n. All these proofs can be done in O(n²) lines. One factor of n comes from the formula abbreviated as Satₙ(x, y) getting longer at a linear rate, and the other factor comes from having to prove Satₙ for each n separately as an ingredient for the next proof.

Combining the two parts, the O(√n) bound on the quantifier de
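The truncated conclusion presumably combines the two parts like this (my gloss, under the assumption that the Satₘ cost is quadratic in the quantifier-depth bound m):

```latex
\text{length-}n \text{ proof of } \phi
\;\Rightarrow\; \mathrm{qd} \le m = O(\sqrt{n})
\;\Rightarrow\; \text{Sat}_m \text{ machinery costs } O(m^2) = O(n) \text{ lines,}
```

which matches the claimed O(n) length for the constructed proof of ϕ.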

I misunderstood your proposal, but you don't need to do this work to get what you want. You can just take each sentence ϕ as an axiom, but declare that this axiom takes n symbols to invoke. This could be done by changing the notion of length of a proof, or by taking axioms ψ → ϕ and ψ with ψ very long.

I'm having trouble thinking about what it would mean for a circuit to contain daemons such that we could hope for a proof. It would be nice if we could find a simple such definition, but it seems hard to make this intuition precise.

For example, we might say that a circuit contains daemons if it displays more optimization than necessary to solve a problem. Minimal circuits could have daemons under this definition though. Suppose that some function f describes the behaviour of some powerful agent, a function g is like f with noise added, and our proble... (read more)

Paul Christiano (6y):
I don't know what the statement of the theorem would be. I don't really think we'd have a clean definition of "contains daemons" and then have a proof that a particular circuit doesn't contain daemons. Also, I expect we're going to have to make some assumption that the problem is "generic" (or else be careful about what daemon means), ruling out problems with the consequentialism embedded in them. (Also, see the comment thread with Wei Dai above; clearly the plausible version of this involves something more specific than daemons.)

Two minor comments. First, the bitstrings that you use do not all correspond to worlds, since, for example, implies , as is a subtheory of . This can be fixed by instead using a tree of sentences that all diagonalize against themselves. Tsvi and I used a construction in this spirit in A limit-computable, self-reflective distribution, for example.

Second, I believe that weakening #2 in this post also cannot be satisfied by any constant distribution. To sketch my reasoning, a trader can try to buy a sequence of sentences

... (read more)

I at first didn't understand your argument for claim (2), so I wrote an alternate proof that's a bit more obvious/careful. I now see why it works, but I'll give my version below for anyone interested. In any case, what you really mean is the probability of deciding a sentence outside of by having it announced by nature; there may be a high probability of sentences being decided indirectly via sentences in .

Instead of choosing as you describe, pick so that the probability of sampling something in is greater than . Then, the probabili

... (read more)

A few thoughts:

I agree that the LI criterion is "pointwise" in the way that you describe, but I think that this is both pretty good and as much as could actually be asked. A single efficiently computable trader can do a lot. It can enforce coherence on a polynomially growing set of sentences, search for proofs using many different proof strategies, enforce a polynomially growing set of statistical patterns, enforce reflection properties on a polynomially large set of sentences, etc. So, eventually the market will not be exploitable on all these things simu

... (read more)
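As a toy illustration of the kind of thing a single efficiently computable trader can enforce, here is a sketch of a coherence-enforcing trader (mine, not from the logical induction paper; `price` and `sentences` are assumed interfaces invented for the example):

```python
# `price(s)` is the market's current price for sentence s; `sentences(k)`
# enumerates triples (phi, psi, disj) with disj equivalent to "phi or psi".

def coherence_trader(day, price, sentences):
    orders = []
    for k in range(day):  # polynomially many checks per day
        phi, psi, disj = sentences(k)
        if price(disj) > price(phi) + price(psi):
            # In every world the indicator of (phi or psi) is at most the
            # sum of the indicators of phi and psi, so selling the
            # overpriced disjunction and buying the disjuncts costs a
            # negative amount now and pays a nonnegative amount later.
            orders.append((disj, -1.0))
            orders.append((phi, +1.0))
            orders.append((psi, +1.0))
    return orders
```

A market exploitable by this trader must eventually push the checked prices toward coherence; analogous traders handle proof search, statistical patterns, and reflection.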

Universal Prediction of Selected Bits solves the related question of what happens if the odd bits are adversarial but the even bits copy the preceding odd bits. Basically, the universal semimeasure learns to do the right thing, but the exact sense in which the result is positive is subtle and has to do with the difference between measures and semimeasures. The methods may also be relevant to the questions here, though I don't see a proof for either question yet.

Yeah, I like tail dependence.

There's this question of whether for logical uncertainty we should think of it more as trying to "un-update" from a more logically informed perspective rather than trying to use some logical prior that exists at the beginning of time. Maybe you've heard such ideas from Scott? I'm not sure if that's the right perspective, but it's what I'm alluding to when I say you're introducing artificial logical uncertainty.

Abram Demski (6y):
I don't think it's much like un-updating. Un-updating takes a specific fact we'd like to pretend we don't know. Plus, the idea there is to back up the inductor. Here I'm looking at average performance as estimated by the latest stage of the inductor. The artificial uncertainty is more like pretending you don't know which problem in the sequence you're at.

Yeah, the 5 and 10 problem in the post actually can be addressed using provability ideas, in a way that fits in pretty naturally with logical induction. The motivation here is to work with decision problems where you can't prove statements of the form A() = a → U() = u for agent A, utility function U, action a, and utility value u, at least not with the amount of computing power provided, but you want to use inductive generalizations instead. That isn't necessary in this example, so it's more of an illustration.

To say a bit more, if you make logical inductors propositionally con

... (read more)

It's hard to analyze the dynamics of logical inductors too precisely, so we often have to do things that feel like worst-case analysis, like considering an adversarial trader with sufficient wealth. I think that problems that show up from this sort of analysis can be expected to correspond to real problems in superintelligent agents, but that is a difficult question. The malignancy of the universal prior is part of the reason.

As to your proposed solution, I don't see how it would work. Scott is proposing that the trader makes conditional contracts, which

... (read more)

In counterfactual mugging with a logical coin, AsDT always uses a logical inductor's best estimate of the utility it would get right now, so it sees the coin as already determined, and sees no benefit from giving Omega money in the cases where Omega asks for money.

The way I would think about what's going on is that if the coin is already known at the time that the expectations are evaluated, then the problem isn't convergent in the sense of AsDT. The agent that pays up whenever asked has a constant action, but it doesn't receive a constant expected util

... (read more)
Abram Demski (6y):
I think AsDT has a limited notion of convergent problem. It can only handle situations where the optimal strategy is to make the same move each time. Tail-dependence opens this up, partly by looking at the limit of average payoff rather than the limit of raw payoff. This allows us to deal with problems where the optimal strategy is complicated (and even somewhat dependent on what's done in earlier instances in the sequence). I wasn't thinking of it as introducing artificial logical uncertainty, but I can see it that way.

This isn't too related to your main point, but every ordered field can be embedded into a field of Hahn series, which might be simpler to work with than surreals.

That page discusses the basics of Hahn series, but not the embedding theorem. (Ehrlich, 1995) treats things in detail, but is long and introduces a lot of definitions. The embedding theorem is stated on page 23 (24 in the pdf).
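To unpack what a Hahn series field is (standard definitions, stated from memory; see the references above for the precise embedding statement): fix an ordered abelian group G of exponents, and take formal series with well-ordered support,

```latex
\mathbb{R}((G)) \;=\; \Big\{\, \textstyle\sum_{g \in G} a_g t^g \;:\; a_g \in \mathbb{R},\ \{\, g : a_g \neq 0 \,\} \text{ is well-ordered} \,\Big\}.
```

Addition is coefficientwise; multiplication is convolution, which well-orderedness of supports keeps finite; and a nonzero series is positive iff its coefficient at the least exponent in its support is positive (with t^g infinitesimal for g > 0). The embedding theorem then says, roughly, that every ordered field order-embeds into ℝ((G)) for a suitable choice of G.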

Nicely done. I should have spent more time thinking about liar sentences; you really can do a lot with them.

Follow-up question - is the set of limits of logical inductors convex? (Your proof also makes me curious as to whether the set of "expanded limits" is convex, though that question is a bit less nice than the other).

I'm copying over some (lightly edited) conversation about this post from Facebook.


Sam Eisenstat: In the original counterexample to the Trolljecture, I guess you're proposing that  be logically prior to , so we can go back and replace  with , but still have to use  in order to derive ? Here I'm just using "□→" to mean "counterfactually results in" without meaning to imply that this should be defined modally.

I agree that this is intuitively sensible, but it is difficult to give a general definition that agrees with

... (read more)

[EDIT: This all deals with measures, not semimeasures; see below.]

For μ to dominate ν in the old sense means that its Bayes score can be at most some constant worse than ν's, regardless of environment. My definition here implies that, but I think it's a stricter requirement.

Your definition is equivalent to the standard definition. Li and Vitányi say that μ dominates ν iff there is some c > 0 such that for any binary string x, we have μ(x) ≥ c ⋅ ν(x). Li and Vitányi's "probability distributions" take a binary string as input while the probability distribu

... (read more)
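Spelling out the equivalence (standard, just a matter of taking logarithms): multiplicative domination is the same as the Bayes score being at most an additive constant worse,

```latex
\exists c > 0\ \forall x:\ \mu(x) \geq c \cdot \nu(x)
\quad\Longleftrightarrow\quad
\exists C\ \forall x:\ \log \mu(x) \geq \log \nu(x) - C
\qquad (C = -\log c).
```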

Haskell doesn't actually let you do this as far as I can tell, but the natural computational way to implement a function 0 → A is with a case expression with no cases. This is sensible because 0 has no constructors, so it should be possible to pattern match it with such an expression. Another way of thinking of this is that we can use pattern matching on sum types and 0 is just the 0-ary sum type.
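In a proof assistant the same idea is directly expressible; a minimal Lean 4 sketch (assuming nothing beyond the core library):

```lean
-- A function out of the empty type, given by a pattern match with no cases.
def fromEmpty (A : Type) : Empty → A :=
  fun e => nomatch e

-- Equivalently, using the built-in eliminator for the empty type.
example (A : Type) : Empty → A := Empty.elim
```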

Condition 4 in your theorem coincides with Lewis' account of counterfactuals. Pearl cites Lewis, but he also criticizes him on the ground that the ordering on worlds is too arbitrary. In the language of this post, he is saying that condition 2 arises naturally from the structure of the problem and that condition 4 derives from the deeper structure corresponding to condition 2.

I also noticed that the function and the partial order can be read as "time of first divergence from the real world" and "first diverges before", respectively. This makes the t

... (read more)
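Concretely, if worlds are sequences of events and w̄ denotes the real world, the two slogans can be transcribed as follows (my notation; f and ⪯ are invented names for the function and partial order being described):

```latex
f(w) = \min\{\, t : w_t \neq \bar{w}_t \,\}, \qquad
w \preceq w' \iff f(w) \leq f(w').
```

Under this reading, the similarity ordering compares worlds by how long they agree with the real world before diverging.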
Diffractor (5y):
Yeah, when I went back and patched up the framework of this post to be less logical-omniscience-y, I was able to get 2→3→4→1, but 2 is a bit too strong to be proved from 1, because my framing of 2 is just about probability disagreements in general, while 1 requires W to assign probability 1 to ϕ.

We can also construct an example where  with a short proof and  also proves , but any such proof is much longer. We only need to put a bound on the proof length in A's proof search. Then, the argument that  proves its own consistency still works, and is rather short:  as the proof length bound increases. However, there cannot be a proof of A() ≠ 1 within A's proof length bound, since if it found one it would immediately take action 1. In this case PA can still prove that  simply by running the agent, but thi

... (read more)

The argument that I had in mind was that if □⌜A() ≠ 1⌝, then A() = 1, so PA ⊢ □⌜A() ≠ 1⌝ → A() = 1 since PA knows how the chicken rule works. This gives us PA ⊢ □⌜A() ≠ 1⌝ → □⌜A() = 1⌝, so PA can prove that if □⌜A() ≠ 1⌝, then PA is inconsistent. I'll include this argument in my post, since you're right that this was too big a jump.

Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step. I didn't mention this explicitly, since I felt like I had seen it before often enough, but

... (read more)
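For readers who want the chain inside PA written out, here is my expansion of the argument above (⌜·⌝ is Gödel quotation; the first line is the premise that PA verifies the chicken rule):

```latex
\begin{align*}
&\mathrm{PA} \vdash \Box\ulcorner A() \neq 1\urcorner \to A() = 1
  && \text{(PA verifies the chicken rule)}\\
&\mathrm{PA} \vdash \Box\ulcorner A() \neq 1\urcorner \to \Box\ulcorner \Box\ulcorner A() \neq 1\urcorner \urcorner
  && \text{(internal necessitation)}\\
&\mathrm{PA} \vdash \Box\ulcorner \Box\ulcorner A() \neq 1\urcorner \to A() = 1 \urcorner
  && \text{(necessitation of the first line)}\\
&\mathrm{PA} \vdash \Box\ulcorner A() \neq 1\urcorner \to \Box\ulcorner A() = 1\urcorner
  && \text{(distribution on the two lines above)}\\
&\mathrm{PA} \vdash \Box\ulcorner A() \neq 1\urcorner \to \Box\ulcorner \bot \urcorner
  && \text{(both } \Box\ulcorner A()=1\urcorner \text{ and } \Box\ulcorner A()\neq 1\urcorner \text{)}
\end{align*}
```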

I (just yesterday) found a counterexample to this. The universe is a 5-and-10 variant that uses the unprovability of consistency:

def U():
  if A() == 2:
    if PA is consistent:
      return 10
    else:
      return 0
  else:
    return 5

The agent can be taken to be modal UDT, using PA as its theory. (The example will still work for other theories extending PA; we just need the universe's theory to include the agents. Also, to simplify some later arguments we suppose that the agent uses the chicken rule, and that it checks action 1 first, then action 2

... (read more)
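For concreteness, the agent being described looks roughly like this, in the same pseudocode style as U() above (my sketch; `provable` stands in for proof search in the agent's theory and is an oracle, not an actual Python function):

```python
def A():
    # Chicken rule: if the theory proves we don't take some action,
    # immediately take that action (checking action 1 first).
    for a in [1, 2]:
        if provable("A() != " + str(a)):
            return a
    # Otherwise, iterate over utilities from best to worst, and over
    # actions in order, looking for provable guarantees A() = a -> U() = u.
    for u in [10, 5, 0]:
        for a in [1, 2]:
            if provable("A() == " + str(a) + " -> U() == " + str(u)):
                return a
    return 1  # fallback if the proof search finds nothing
```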
Benya Fallenstein (9y):
It seems like this argument needs soundness of PA, not just consistency of PA. Do you see a way to prove in PA that if PA ⊢ A() ≠ 1, then PA is inconsistent?

[edited to add:] However, your idea reminds me of my post on the odd counterfactuals of playing chicken, and I think the example I gave there makes your idea go through: The scenario is that you get 10 if you take action 1 and it's not provable that you don't take action 1; you get 5 if you take action 2; and you get 0 if you take action 1 and it's provable that you don't. Clearly you should take action 1, but I prove that modal UDT actually takes action 2. To do so, I show that PA proves A() = 1 → ¬□⌜A() = 1⌝. (Then, from the outside, A() = 2 follows by soundness of PA.)

This seems to make your argument go through if we can also show that PA doesn't show A() ≠ 1. But if it did, then modal UDT would take action 1 because this comes first in its proof search, contradiction. Thus, PA proves A() = 1 → U() = 0 (because this follows from A() = 1 → ¬□⌜A() = 1⌝), and also PA doesn't prove A() = 1 → U() = 10. As in your argument, then, the trolljecture implies that we should think "if the agent takes action 1, it gets utility 0" is a good counterfactual, and we don't think that's true.

Still interested in whether you can make your argument go through in your case as well, especially if you can use the chicken step in a way I'm not seeing yet. Like Patrick, I'd encourage you to develop this into a post.
orthonormal (9y):
Nice! Yes, I encourage you to develop this into a post.