Chris_Leong

Comments

Oracle predictions don't apply to non-existent worlds

That's an interesting point. I suppose it might be viable to acknowledge that the problem, taken literally, doesn't require the prediction to be correct outside of the factual, but nonetheless to claim that we should resolve the vagueness inherent in the question of what exactly the counterfactual is by constructing it to meet this condition. I wouldn't necessarily be strongly against this - my issue is with confusion about what an Oracle's prediction necessarily entails.

Regarding your notion of things being magically stipulated, I suppose there's some possible resemblance to the ideas I proposed before in Counterfactuals As A Matter of Social Convention, although The Nature of Counterfactuals describes where my views have shifted since then.

Oracle predictions don't apply to non-existent worlds

I presume Vladimir and I are likely discussing this from within the determinist paradigm, in which "either the Oracle is wrong, or the choice is illusory" doesn't apply (although I propose a similar idea in Why 1-boxing doesn't imply backwards causation).

Oracle predictions don't apply to non-existent worlds

Isn't that prediction independent of your decision to grab your coat or not?

Oracle predictions don't apply to non-existent worlds

Yeah, you want either information about the available counterfactuals or information independent of your decision. Information about just the path taken isn't something you can condition on.

Oracle predictions don't apply to non-existent worlds

"The Oracle's prediction only has to apply to the world where the prediction is delivered" - My point was that predictions that are delivered in the factual don't apply to counterfactuals, but the way you've framed it is better as it handles a more general set of cases. It seems like we're on the same page.

MIRI/OP exchange about decision theory

How would I in principle estimate how many more votes go to my favored presidential candidate in a presidential election (beyond the standard answer of "1")?

 

I'm happy to see Abram Demski mention this as I've long seen this as a crucial case for trying to understand subjunctive linking.

My own answer would be the EDT answer: how much does your decision correlate with theirs?

This is my perspective as well. I can't imagine that subjunctive linking exists ontologically. That is, there isn't some objective fact in the universe, in and of itself, linking someone's decision to yours; instead it is about how you model other actors (I don't know if I still fully embrace this post, but it's still illustrative of my position).

So unless we actually start getting into the details of how you're modelling the situation, we can't really answer the question. In a way, this means that the concept of subjunctive linking can be a misleading frame here. The way the question is answered is by updating the model given the new information (that a particular person voted a particular way), rather than by trying to identify some mysterious free-floating effect that we have no reason to think exists.

One way to get a handle on this would be to construct the simplest case we can analyse. So let's imagine a world where there are two candidates, Hilary and Obama. We'll assume there are 10 voters and that you have no information about the other voters apart from the fact that:

  • There's a 50% chance that every voter has a 40% chance of voting for Hilary and 60% for Obama
  • There's a 50% chance that every voter has a 60% chance of voting for Hilary and 40% for Obama

Once you've decided on your vote, it should cause you to update your probability of which world you are in, and then you can calculate the chance of winning the election. Anyway, this is just a comment, but I'll probably solve this properly and put it in its own separate post afterwards.
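
To make the update concrete, here's a minimal sketch of that calculation in Python. This is my own toy formalisation of the setup above, not a worked-out answer: it assumes the two equally likely worlds and ten voters as stated, treats a 5-5 tie as a loss for simplicity, and uses the EDT-style move of treating my own vote as evidence about which world I'm in.

```python
from math import comb

# Toy model: 10 voters (including me), two equally likely worlds, and in each
# world every voter independently votes for Hilary with the stated probability.
WORLDS = [(0.5, 0.4), (0.5, 0.6)]   # (prior, per-voter P(votes Hilary))
N_VOTERS = 10
NEEDED = 6                          # strict majority of 10; a 5-5 tie counts as a loss

def p_at_least(k_needed, n, p):
    """P(at least k_needed of n independent voters vote Hilary)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

def p_hilary_wins(i_vote_hilary):
    # EDT-style update: since I know nothing distinguishing me from the other
    # voters, my own vote is evidence about which world I'm in.
    likelihoods = [p if i_vote_hilary else 1 - p for _, p in WORLDS]
    evidence = sum(prior * like for (prior, _), like in zip(WORLDS, likelihoods))
    posteriors = [prior * like / evidence for (prior, _), like in zip(WORLDS, likelihoods)]

    my_votes = 1 if i_vote_hilary else 0
    return sum(
        post * p_at_least(NEEDED - my_votes, N_VOTERS - 1, p)
        for post, (_, p) in zip(posteriors, WORLDS)
    )

print("P(Hilary wins | I vote Hilary):", round(p_hilary_wins(True), 4))
print("P(Hilary wins | I vote Obama): ", round(p_hilary_wins(False), 4))
```

Comparing this against the same calculation without the Bayesian update on my own vote would give a sense of how much of the swing comes from the evidential correlation rather than from the single extra vote itself.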

I imagine that by constructing a whole bunch of similar scenarios we might be able to make solid progress here.

For voting in particular, if these esoteric DT considerations would change my answer, then they usually wouldn't, actually (because if the DT is important enough in my computation, then I'm part of a very small reference class of voters, and so, should mostly act like it's just my one vote anyway).

Strongly agreed and something that people often miss.
 

Extended Picture Theory or Models inside Models inside Models

Yet more pretentious poetry

To understand the theory of decisions,

We must enquire into the nature of a decision,

A thing that does not truly exist.

Everything is determined.

The path of the future is fixed.

So when you decide,

There is nothing to decide.

You simply discover, what was determined in advance.

A decision only exists,

Within a model.

If there are two options,

Then there are two possible worlds.

Two possible versions of you.

A decision does not exist,

Unless a model has been built.

But a model can not be built.

Without defining a why.

We go and build models.

To tell us what to do.

To aid in decisions.

Which don't exist at all.

A model it must,

To reality match.

Our reality cannot be spoken.

The cycle of models can't be broken.

Upwards we must go.

Meta.

A decision does need,

A factual-counter,

But a counterfactual does need,

A decision indeed.

This is not a cycle.

A decision follows,

Given the 'factuals.

The 'factuals follow,

From a method to construct,

This decision's ours to make.

But first we need,

The 'factuals for the choice,

Of how to construct,

The 'factuals for the start.

A never ending cycle,

Spiralling upwards,

Here we are again.

Problems facing a correspondence theory of knowledge

Well, taking the simpler case of exactly reproducing a certain string, you could find the simplest program that produces the string (similar to Kolmogorov complexity) and use that as a measure of complexity.

A slightly more useful way of modelling things may be to have a bunch of different strings, each assigned points representing its level of importance. Perhaps we then produce a metric combining the Kolmogorov complexity of a decoder with the sum of the points it earns, where points are earned by producing the desired strings concatenated with a predefined separator. For example, we might take the quotient of the two.
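
As a toy illustration of that quotient metric (very much a sketch: the particular strings, point values, decoder, and the use of compressed length as a stand-in for Kolmogorov complexity are all my own assumptions for the example):

```python
import zlib

# Desired strings and the points representing their importance (made-up values).
TARGETS = {"the cat sat": 5.0, "on the mat": 3.0}
SEPARATOR = "|"   # predefined separator used to concatenate the desired strings

def decoder(blob: str) -> set[str]:
    """A trivial decoder: split its input on the separator."""
    return set(blob.split(SEPARATOR))

# A textual description stands in for "the program"; its compressed length is a
# crude proxy for Kolmogorov complexity (which isn't computable).
DECODER_DESCRIPTION = "split the input on '|'"

def quotient_metric(decode, description, blob):
    produced = decode(blob)
    points = sum(p for s, p in TARGETS.items() if s in produced)
    complexity = len(zlib.compress(description.encode()))
    return points / complexity

blob = SEPARATOR.join(TARGETS)   # "the cat sat|on the mat"
print(quotient_metric(decoder, DECODER_DESCRIPTION, blob))
```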

One immediate issue with this is that some of the strings may contain overlapping information, and we'd still have to produce a metric to assign importances to the strings. Perhaps a simpler case would be one where the strings represent patterns in a stream by encoding a Turing machine, with the Turing machines able to output sets of symbols (representing the possible symbols at each location) instead of just single symbols. The number of points a pattern provides would then be equal to how much of the stream it allows you to predict. (This would still require producing a representation of the universe in which the amount of the stream predicted is roughly equivalent to how useful the predictions are.)
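
Concretely, the stream version might look something like this (again just my own toy formalisation: the alphabet, the stream and the hard-coded "pattern" are invented for the example, and a fuller version would encode the pattern as a Turing machine and weigh its complexity as above):

```python
ALPHABET = {"a", "b"}
stream = "abababababab"

def alternating_pattern(position: int) -> set[str]:
    """Toy stand-in for a Turing machine that outputs a set of possible
    symbols for each location in the stream."""
    return {"a"} if position % 2 == 0 else {"b"}

def points(pattern, stream):
    """Points = how much of the stream the pattern lets you predict: credit a
    position only if the prediction is correct and non-trivial (narrower than
    the whole alphabet)."""
    return sum(
        1
        for i, symbol in enumerate(stream)
        if symbol in pattern(i) and pattern(i) != ALPHABET
    )

print(points(alternating_pattern, stream))   # 12: every position is pinned down
```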

Any thoughts on this general approach?

Problems facing a correspondence theory of knowledge

I think that part of the problem is that talking about knowledge requires adopting an interpretative frame. We can only really say whether a collection of particles represents some particular knowledge from within such a frame, although it would be possible to determine the minimum-complexity frame that interprets a system as representing certain facts. In practice though, whether or not a particular piece of storage contains knowledge will depend on the interpretative frames in the environment, although we need to remember that interpretative frames can emulate other interpretative frames, e.g. a human experimenting with multiple codes in order to decode a message.

Regarding the topic of partial knowledge, it seems that the importance of various facts will vary wildly from context to context, and also with the goal. I'm somewhat skeptical that goal-independent knowledge will have a nice definition.

Which counterfactuals should an AI follow?

I believe that we need to take a Conceptual Engineering approach here. That is, I don't see counterfactuals as intrinsically part of the world, but rather as something we construct. The question to answer is: what purpose are we constructing these for? Once we've answered this question, we'll be 90% of the way towards constructing them.

As far as I can see, the answer is that we imagine a set of possible worlds and we notice that agents that use certain notions of counterfactuals tend to perform better than agents that don't. Of course, this raises the question of which possible worlds to consider, at which point we notice that this whole thing is somewhat circular.

However, this is less problematic than people think. Just as we can only talk about what things are true after already having taken some assumptions to be true (see Where Recursive Justification hits Bottom), it seems plausible that we might only be able to talk about possibility after having already taken some things to be possible.
