
Vladimir Slepnev's Comments

On the falsifiability of hypercomputation, part 2: finite input streams

Your arbitration oracle seems equivalent to the consistent guessing problem described by Scott Aaronson here. Also see the comment from Andy D proving that it's indeed strictly simpler than the halting problem.

On the falsifiability of hypercomputation

I think your argument also works for PA and many other theories. The underlying idea is known as game semantics:

The simplest application of game semantics is to propositional logic. Each formula of this language is interpreted as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions. Each move of the game consists of allowing the owner of the dominant connective to pick one of its branches; play will then continue in that subformula, with whichever player controls its dominant connective making the next move. Play ends when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be considered true precisely when the Verifier has a winning strategy, while it will be false whenever the Falsifier has the winning strategy.
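For propositional formulas the game is small enough to write down directly. A minimal sketch in Haskell (the type and function names are just illustrative):

```haskell
-- Game semantics for propositional formulas: the Verifier owns the
-- disjunctions, the Falsifier owns the conjunctions, and a formula is
-- true exactly when the Verifier has a winning strategy.
data Formula
  = Atom Bool            -- a primitive proposition with a fixed truth value
  | Or Formula Formula   -- the Verifier picks a branch
  | And Formula Formula  -- the Falsifier picks a branch

verifierWins :: Formula -> Bool
verifierWins (Atom b)  = b                                 -- play has ended
verifierWins (Or l r)  = verifierWins l || verifierWins r  -- Verifier needs one winnable branch
verifierWins (And l r) = verifierWins l && verifierWins r  -- Falsifier will pick the worst branch

main :: IO ()
main = print (verifierWins (And (Or (Atom False) (Atom True)) (Atom True)))
-- prints True: whichever conjunct the Falsifier picks, the Verifier can still win.
```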

(Double-)Inverse Embedded Agency Problem

To me the problem of embedded agency isn't about fitting a large description of the world into a small part of the world. That's easy with quining, which is mentioned in the MIRI writeup. The problem is more about the weird consequences of learning about something that contains the learner.
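A standard illustration of the trick in Haskell, a one-line program whose output is exactly its own source:

```haskell
-- A standard Haskell quine: the line below, compiled on its own,
-- prints exactly its own source text.
main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "
```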

Also, I love your wording that the problem has many faucets. Please don't edit it out :-)

cousin_it's Shortform

Edit: no point asking this question here.

Transparent Newcomb's Problem and the limitations of the Erasure framing

I see, thanks, that makes it clearer. There's no disagreement: you're trying to justify the approach that people are already using. Sorry about the noise.

Transparent Newcomb's Problem and the limitations of the Erasure framing

Well, the program is my formalization. All the premises are right there. You should be able to point out where you disagree.

Transparent Newcomb's Problem and the limitations of the Erasure framing

I couldn't understand your comment, so I wrote a small Haskell program to show that two-boxing in the transparent Newcomb problem is a consistent outcome. What parts of it do you disagree with?
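The gist of the program is something like this (a minimal sketch; the names are illustrative):

```haskell
data Action   = OneBox | TwoBox deriving (Show, Eq)
data BoxState = Full   | Empty  deriving (Show, Eq)

-- An agent maps what it sees in the transparent box to an action.
-- The two-boxer takes both boxes no matter what it sees.
twoBoxer :: BoxState -> Action
twoBoxer _ = TwoBox

-- The predictor simulates the agent on a *simulated* full box and
-- fills the real box only if the simulation one-boxes.
predict :: (BoxState -> Action) -> BoxState
predict agent = if agent Full == OneBox then Full else Empty

-- The actual run: the agent acts on the box the predictor prepared.
outcome :: (BoxState -> Action) -> (BoxState, Action)
outcome agent = let box = predict agent in (box, agent box)

main :: IO ()
main = print (outcome twoBoxer)
-- prints (Empty,TwoBox): the two-boxer sees an empty box and takes both,
-- the prediction was correct, and nothing is inconsistent.
```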

Transparent Newcomb's Problem and the limitations of the Erasure framing

If you see a full box, then you must be going to one-box if the predictor really is perfect.

Huh? If I'm a two-boxer, the predictor can still make a simulation of me, show it a simulated full box, and see what happens. It's easy to formalize, with computer programs for the agent and the predictor.

Analysing: Dangerous messages from future UFAI via Oracles

It seems that coordinated erasure, chokepoints and short horizons can help with this problem as well. But if many companies or governments have their own oracles and benefit from longer horizons, it gets harder.

Analysing: Dangerous messages from future UFAI via Oracles

the bucket chain can only accelerate a future UFAI, not create one

Wait, I'm not sure that's true anymore. Imagine there are two oracles running at the same time, with independent erasure events. Can they "cooperate" to bring into existence a UFAI that will reward them both?
