James Payor

I think about AI alignment; send help.

I'm also on twitter. More links on my homepage payor.io.


Also, here's a proof that a bot is never exploited. It only cooperates when its partner provably cooperates.

First, note that $A \to \Box A$, i.e. if $A$ cooperates it provably cooperates. (Proof sketch: $A \leftrightarrow \Box(\Box A \to B) \to \Box\Box(\Box A \to B) \leftrightarrow \Box A$.)

Now we show that $A \to \Box B$ (i.e. if $A$ chooses to cooperate, its partner is provably cooperating):

  1. We get $A \to (\Box\Box A \to \Box B)$ by distributing: $\Box(\Box A \to B) \to (\Box\Box A \to \Box B)$.
  2. We get $A \to \Box\Box A$ by applying internal necessitation ($\Box A \to \Box\Box A$) to $A \to \Box A$.
  3. By (1) and (2), $A \to \Box B$.

(PS: we can strengthen this to $A \to \Box(A \wedge B)$, by noticing that $\Box A \wedge \Box B \to \Box(A \wedge B)$.)
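For reference, here's the same argument spelled out with the provability-logic rules named at each step; this is just a restatement of the above, taking the bot to be the fixpoint $A \leftrightarrow \Box(\Box A \to B)$.

```latex
% Restating the argument above, naming the GL rules used at each step.
\begin{align*}
&\text{(1)}\quad A \leftrightarrow \Box(\Box A \to B)
    && \text{definition of the bot} \\
&\text{(2)}\quad \Box(\Box A \to B) \to (\Box\Box A \to \Box B)
    && \text{distribution: } \Box(X \to Y) \to (\Box X \to \Box Y) \\
&\text{(3)}\quad A \to \Box A
    && \text{proof sketch above, using } \Box X \to \Box\Box X \\
&\text{(4)}\quad A \to \Box\Box A
    && \text{(3) chained with } \Box A \to \Box\Box A \\
&\text{(5)}\quad A \to \Box B
    && \text{(1), (2), and (4) together}
\end{align*}
```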

Anyhow, regarding probability distributions, there's some philosophical difficulty in my opinion about "grounding". Specifically, what reason should I have to trust that the probability distribution is doing something sensible around my safety questions of interest? How did we construct things such that it was?

The best approach I'm aware of to building a computable (but not practical) distribution with some "grounding" results is logical induction / Garrabrant induction. They come with a self-trust result of the form that logical inductors will, across time, converge to predicting that their future selves' probabilities agree with their current probabilities. If I understand correctly, this includes limiting toward predicting a conditional probability of $p$ for an event $X$, given that the future inductor assigns probability $p$ to $X$.
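Very roughly in symbols (my loose paraphrase of the flavour of the result, not the exact theorem statement):

```latex
% Loose paraphrase: conditioned on a later stage of the inductor assigning
% probability p to X, the current stage's probability for X tends toward p.
\mathbb{P}_n\!\left(X \,\middle|\, \text{``}\mathbb{P}_{n+k}(X) = p\text{''}\right) \longrightarrow p
\qquad \text{as } n \to \infty
```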

...however, as I understand it, there's still scope for any probability distributions we try to base on logical inductors to be "ungrounded", in that we only have a guarantee that ungrounded/adversarial perturbations must be "finite" in the limit.

Here is something more technical on the matter that I alas haven't made the personal effort to read through: https://www.lesswrong.com/posts/5bd75cc58225bf067037556d/logical-inductor-tiling-and-why-it-s-hard

In a more realistic and complicated setting, we may well want to obtain a high probability under some distribution we trust to be well-grounded as our condition for a chain of trust. In terms of the technical difficulty I'm interested in working through, I think it should be possible to get satisfying results about proving that another proof system is correct, and whatnot, without needing to invoke probability distributions. To the extent that you can make things work with probabilistic reasoning, I think they can also be made to work in a logic setting, but we're currently missing some pieces.

My belief is that this one was fine, because self-reference occurs only under quotation, so it can be constructed by modal fixpoint / quining. But that is why the base definition of "good" is built non-recursively.

Is that what you were talking about?

(Edit: I've updated the post to be clearer on this technical detail.)

I continue to think there's something important in here!

I haven't had much success articulating why. I think it's neat that the loop-breaking/choosing can be internalized, and not need to pass through Löb. And it informs my sense of how to distinguish real-world high-integrity vs low-integrity situations.

By "gag order" do you mean just as a matter of private agreement, or something heavier-handed, with e.g. potential criminal consequences?

I have trouble understanding the absolute silence we're seeing. There seem to be very few leaks, and all of them are very mild-mannered, failing to build any consensus narrative that challenges OA's press in the public sphere.

Are people not able to share info over Signal or otherwise tolerate some risk here? It doesn't add up to me if the risk is just some chance of OA then trying to sue you into bankruptcy, especially since I think a lot of us would offer support in that case, and the media wouldn't paint OA in a good light for it.

I am confused. (And I am grateful to William for at least saying this much, given the climate!)

Awesome, thanks for writing this up!

I very much like how you are giving a clear account of a mechanism like "negative reinforcement suppresses text by adding contextual information to the model, and this has more consequences than just suppressing text".

(In particular, the model isn't learning "just don't say that", it's learning "these are the things to avoid saying", which can make it easier to point at the whole cluster?)

I tried to formalize this, using $A \to B$ as a "poor man's counterfactual", standing in for "if Alice cooperates then so does Bob". This has the odd behaviour of becoming "true" when Alice defects! You can see this as the counterfactual collapsing and becoming inconsistent, because its premise is violated. But this does mean we need to be careful about using these.

For technical reasons we upgrade to $\Box A \to B$, which says "if Alice cooperates in a legible way, then Bob cooperates back". Alice tries to prove this, and legibly cooperates if so.

This setup gives us "Alice legibly cooperates if she can prove that, if she legibly cooperates, Bob would cooperate back". In symbols, $A \leftrightarrow \Box(\Box A \to B)$.

Now, is this okay? What about proving $\neg\Box A$, which would make $\Box A \to B$ hold vacuously?

Well, actually you can't ever prove that! Because of Löb's theorem. (If the theory proved $\neg\Box A$, it would prove $\Box A \to A$, hence $A$ by Löb, hence $\Box A$, making it inconsistent.)

Outside the system we can definitely see cases where $A$ is unprovable, e.g. because Bob always defects. But you can't prove this inside the system. You can only prove things like "$\neg\Box_n A$" (there is no proof of $A$ of length at most $n$) for finite proof lengths $n$.

I think this is best seen as a consequence of "with finite proof strength you can only deny proofs up to a limited size".

So this construction works out, perhaps just because two different weirdnesses are canceling each other out. But in any case I think the underlying idea, "cooperate if choosing to do so leads to a good outcome", is pretty trustworthy. It perhaps deserves to be cashed out in better provability math.
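For anyone who wants to poke at this concretely, here's a small Python sketch, written just for this comment, that evaluates fixpoints like $A \leftrightarrow \Box(\Box A \to B)$ on the Kripke chain for GL: $\Box\varphi$ holds at world $n$ iff $\varphi$ holds at every world below $n$, and for closed fixpoints like these the values stabilize after a few worlds, matching the arithmetic outcome as far as I can tell. The agent names and helpers are all made up for the illustration.

```python
# Sketch (mine, not from the post): evaluate modal-agent fixpoints on the
# Kripke chain of worlds 0, 1, 2, ..., where "box phi" at world n means
# "phi is true at every world m < n".

def box(history, name):
    """box(name) at the current world: name's value held at every earlier world."""
    return all(world[name] for world in history)

def implies(p, q):
    return (not p) or q

def run(agents, levels=10):
    """agents: dict name -> rule(history), where history[k][name] is that
    agent's value at world k. Returns the values at the last world computed."""
    history = []
    for _ in range(levels):
        history.append({name: rule(history) for name, rule in agents.items()})
    return history[-1]

# Alice is the bot A <-> box(box A -> B): "cooperate iff it's provable that
# my legible cooperation implies Bob cooperates back".
def alice(history):
    return all(implies(box(history[:k], "alice"), history[k]["bob"])
               for k in range(len(history)))

# Bob option 1: the mirror-image bot, B <-> box(box B -> A).
def bob_mirror(history):
    return all(implies(box(history[:k], "bob"), history[k]["alice"])
               for k in range(len(history)))

# Bob option 2: always defect.
def bob_defect(history):
    return False

if __name__ == "__main__":
    print(run({"alice": alice, "bob": bob_mirror}))  # both True: mutual cooperation
    print(run({"alice": alice, "bob": bob_defect}))  # alice False: she isn't exploited
```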

(Thanks also to you for engaging!)

Hm. I'm going to take a step back, away from the math, and see if that makes things less confusing.

Let's go back to Alice thinking about whether to cooperate with Bob. They both have perfect models of each other (perhaps in the form of source code).

When Alice goes to think about what Bob will do, maybe she sees that Bob's decision depends on what he thinks Alice will do.

At this juncture, I don't want Alice to "recurse", falling down the rabbit hole of "Alice thinking about Bob thinking about Alice thinking about--" and so on.

Instead Alice should realize that she has a choice to make, about who she cooperates with, which will determine the answers Bob finds when thinking about her.

This manoeuvre is doing a kind of causal surgery / counterfactual-taking. It cuts the loop by identifying "what Bob thinks about Alice" as a node under Alice's control. This is the heart of it, and imo doesn't rely on anything weird or unusual.
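A toy way to picture that loop-cut in code (entirely my own illustration; the payoffs and the simple conditional-cooperator Bob are made up): instead of simulating Bob simulating her, Alice treats "what Bob's model of her outputs" as the thing she is choosing, computes the outcome each choice induces, and picks the best one.

```python
# Toy illustration (mine, not from the thread): Alice doesn't recurse into
# "Bob thinking about Alice thinking about Bob...". She treats the answer
# Bob's model of her will find as a node she controls, and chooses it directly.

# Alice's payoffs in a standard prisoner's dilemma: (her move, Bob's move).
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def bob_response(alice_as_bob_models_her):
    """Bob's decision as a function of what his (accurate) model of Alice outputs.
    Here Bob is a simple conditional cooperator."""
    return "C" if alice_as_bob_models_her == "C" else "D"

def alice_choose():
    """Alice evaluates each move she could legibly commit to, including how Bob
    responds to *that commitment*, and picks the best. No infinite regress:
    "what Bob thinks Alice does" is exactly the thing being chosen."""
    outcomes = {move: PAYOFFS[(move, bob_response(move))] for move in ("C", "D")}
    return max(outcomes, key=outcomes.get)

if __name__ == "__main__":
    print(alice_choose())  # "C": committing to cooperate induces cooperation, which pays more
```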
