Agent X has evidence for proposition P that agent Y does not have, and X wants to convince Y that P is true (or, more precisely, to increase Y's credence in P). However, there are the following unfortunate conditions:
1) Y doesn't trust X, meaning that X just telling Y that they have evidence will do little to convince Y. (In particular, they don't have common knowledge of their credences.)
2) X wants to reveal as little as possible about the evidence they have.
Is there anything clever X and Y can do here? Or is there a reason why this is impossible? Can one prove a tradeoff between the size of the update to Y's credence and the amount of detail about the evidence revealed? In case this is too vague or does not make formal sense for some reason, and in any case for illustration, here are some particular examples of what I have in mind:

Example 1: X has a picture of the Moon and wants to convince Y that the Moon exists. The picture is a selfie taken by Adolf Hitler living on the far side of the Moon; X doesn't want Y to know the various other things of which this picture or X's possession of the picture would be evidence (e.g. because X wants to maintain their edge in the "space Nazi invasion of Earth by 2030" prediction market), so just handing the picture to Y is not a good option for X.

Example 2: The State Department wants to convince the public that Russia is in the process of making a specific disinformation video while revealing as little as possible about how they found out that this is the case. See https://www.youtube.com/watch?v=8G3WtjU9XA4 and https://www.youtube.com/watch?v=rU1_BHMHsI8 (but let's ignore the question of whether they actually, in reality, in this particular case, have information or not; this is just supposed to be a hypothetical example for the purpose of this discussion).


My observations thus far & more refined versions of the previous questions:
This seems obviously close to zero-knowledge proofs, but I haven't been able to flesh out a correspondence to any satisfactory degree. Should X be trying to give something like the evidential analogue of a zero-knowledge proof of P?  Or should X be trying to give something like a zero-knowledge proof of "I have evidence that P"? Maybe these are equivalent, but one is a better way to think about it than the other? What would such a thing look like in the above examples? Is the best X can do to just generally become more trustworthy, or maybe to generally provide evidence they are trustworthy? Perhaps the reason some googling did not provide an answer is that I'm missing the right vocabulary; if so, I'd appreciate any pointers.

Edit with some further thoughts: I think one might have a chance to make the correspondence with zero-knowledge proofs work if "being a picture that, if handed to me by some particular person, would increase my credence that the Moon exists" were an NP-complete problem in some (currently unclear) sense. Then maybe one could reduce it to a graph coloring problem and give a zero-knowledge proof of having a coloring of the graph (I recall at least the second step being explained well here https://www.youtube.com/watch?v=5ovdoxnfFVc by Avi Wigderson). The obvious issue is that I have no idea how to write down a polynomial-time verifier for something like "being a picture that would increase my credence that the Moon exists". How can agents X and Y agree on some graph to color which should have a coloring if and only if X has evidence that the Moon exists? That would be cool, but I have no idea how to do this.

If X and Y both knew a priori that the Moon has some hard-to-compute graph coloring drawn on its surface, i.e. both knew exactly which hard-to-color graph happens to be on the Moon, but Y did not know the coloring, then X could convince Y that they have a picture of the Moon by giving a zero-knowledge proof of knowing the coloring. Is there something physical about the Moon which is sufficiently similar to such a thing? Arrangements of craters that are easy to verify as legitimate but computationally hard to construct from scratch? More generally, a list of details about the Moon that is hard to come up with but easy to check for consistency?

By the way, in such cases, I think X providing more details of a situation can actually increase the credence Y has in X being truthful, which interacts interestingly with https://www.lesswrong.com/posts/Yq6aA4M3JKWaQepPJ/burdensome-details (there is of course no contradiction here; I think it's just that being able to provide additional descriptive details about something in such cases is itself evidence for the describendum, even as the describendum becomes more detailed, and it seems possible for the balance to be in the direction of higher credence).
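For what it's worth, here is a minimal sketch of one round of the graph 3-coloring protocol from the linked talk, in Python, with salted hashes standing in for a proper cryptographic commitment scheme and a toy graph standing in for whatever graph X and Y would actually have to agree on (the graph, coloring, and parameters are all illustrative assumptions):

```python
import hashlib
import os
import random

# Toy graph and a valid 3-coloring (the prover X's secret witness).
EDGES = [(0, 1), (1, 2), (2, 0), (2, 3)]
COLORING = {0: 0, 1: 1, 2: 2, 3: 0}  # colors are 0, 1, 2

def commit(value: int) -> tuple[bytes, bytes]:
    """Salted hash commitment: hiding and binding, assuming SHA-256."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + bytes([value])).digest(), salt

def one_round(edges, coloring) -> bool:
    # X randomly relabels the colors, then commits to every vertex's color.
    perm = random.sample(range(3), 3)
    permuted = {v: perm[c] for v, c in coloring.items()}
    commitments = {v: commit(c) for v, c in permuted.items()}

    # Y picks one edge at random and asks X to open its two endpoints.
    u, v = random.choice(edges)
    (digest_u, salt_u), (digest_v, salt_v) = commitments[u], commitments[v]

    # Y checks that the openings match the commitments and the colors differ.
    ok_u = hashlib.sha256(salt_u + bytes([permuted[u]])).digest() == digest_u
    ok_v = hashlib.sha256(salt_v + bytes([permuted[v]])).digest() == digest_v
    return ok_u and ok_v and permuted[u] != permuted[v]

# A cheating X fails each round with probability at least 1/len(EDGES), so
# repetition drives the soundness error toward zero; Y only ever sees two
# freshly relabeled (distinct) colors per round, which Y could have
# simulated alone -- that is where the "zero-knowledge" comes from.
assert all(one_round(EDGES, COLORING) for _ in range(200))
```

Of course, this only sharpens rather than answers the question above: everything hinges on finding a graph whose colorability tracks "X has evidence for P".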

5 Answers

J Bostock

Feb 04, 2022

Both of your examples share the feature that the information becomes verifiable at some point in the future. In this case the best option is to put down money. Or even just to credibly offer to put down money.

For example, X offers to bet Y $5000 (possibly at very high odds) that in the year 2030 (after the Moon Nazis have invaded) they will provide a picture of the Moon. If Y takes this bet seriously, they should update. In fact, all other actors A, B, C who observe this bet will update as well.
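As a toy illustration of the size of this update (all the numbers below are assumptions invented for the example, not anything from the answer itself):

```python
# Toy model of Y's update upon seeing X offer a costly bet.
# All probabilities here are illustrative assumptions, not derived values.

prior = 0.01           # Y's prior credence in P
p_bet_if_true = 0.8    # chance X offers the bet given X really has the evidence
p_bet_if_false = 0.05  # chance X bluffs, risking the $5000 for nothing

# Bayes' rule: P(P | bet offered) = P(bet | P) * P(P) / P(bet)
posterior = (p_bet_if_true * prior) / (
    p_bet_if_true * prior + p_bet_if_false * (1 - prior)
)
print(f"Y's posterior: {posterior:.3f}")  # ~0.139, up from a prior of 0.01
```

The update is driven entirely by the gap between how cheaply an informed X can offer the bet and how expensive it is for a bluffing X.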

The same is (sort of) true of the second case: just credibly bet some money that in the next five months Russia will release the propaganda video. Of course if you bet too much Russia might not release the video, and you might go bankrupt.

I don't think this works for the general case, although it covers a lot of smaller cases. It depends on the rate at which the value of the information you want to preserve depreciates.

Or more generally, X sends a costly signal of their belief in P. If X is the state (as in Example 2), a bet is probably impractical, but doing anything that would be costly if P were false should work. For this, though, it makes a big difference in what sense Y does not trust X. If Y thinks X may deceive, costly signals are good. If Y thinks X is stupid or irrational or similar, a show of belief in P is useless.
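Continuing the toy numbers from the sketch above, this last point is easy to see: if Y models X as someone who bets readily even without evidence, the likelihood ratio collapses and the costly signal conveys almost nothing:

```python
# Same model as before, but Y now suspects X is irrational rather than
# deceptive: an irrational X bets readily even without evidence.
p_bet_if_false = 0.5
posterior = (0.8 * 0.01) / (0.8 * 0.01 + 0.5 * 0.99)
print(f"Y's posterior: {posterior:.3f}")  # ~0.016: barely above the 0.01 prior
```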

ChristianKl

Feb 13, 2022

Zero-knowledge proofs are exactly about this kind of problem. They allow you, for example, to buy insurance against a particular flight being canceled, because you personally want to hedge your cancellation risk, without the insurance company learning which specific flight you care about.

If you want to stay with crypto solutions, Kleros provides a way for a few anonymous humans to evaluate the evidence in a trustworthy manner.
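A much simpler cryptographic building block in the same spirit is commit-reveal. The sketch below is a generic scheme, not anything Kleros specifically does, and the claim text is a made-up example: X goes on the record about a claim now without revealing it, then opens the commitment once the claim has become publicly checkable.

```python
import hashlib
import os

def commit(claim: bytes) -> tuple[bytes, bytes]:
    """Salted SHA-256 commitment: hiding (the salt masks low-entropy claims)
    and binding (X cannot later open it to a different claim)."""
    salt = os.urandom(32)
    return hashlib.sha256(salt + claim).digest(), salt

def verify(commitment: bytes, salt: bytes, claim: bytes) -> bool:
    return hashlib.sha256(salt + claim).digest() == commitment

# Now: X publishes only the commitment (e.g. in a newspaper ad).
claim = b"Russia will release a staged-provocation video before 2022-04-01"
commitment, salt = commit(claim)

# Later, once the event is public knowledge: X opens the commitment, and
# anyone can check that X was on the record before the event happened.
assert verify(commitment, salt, claim)
```

Note that this only proves X made the claim early, not how X knew; like the betting answer above, it relies on the claim becoming verifiable eventually.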

There are also legal means to make lying costly. Traditionally, that's what an oath to tell the truth is about.

The State Department could say: "We haven't prosecuted enough of our people for lying; we invite the Department of Justice to prosecute employees X, Y, and Z because they lied to Congress. Our press people will from now on give their press briefings under oath."

The problem for the State Department is that it doesn't want to give up its ability to lie to the public, so it's going to be unwilling to take actions like these that would make its public statements more trustworthy. It wants to have it both ways: to lie to the public and to be believed by the public at the same time.

Another way is to go through trusted third parties. If there are trustworthy newspapers, evidence can be shared with the journalists behind closed doors.

Slider

Feb 04, 2022

I developed a rule to avoid backseat gaming: commentary that stays within the epistemic point of view of the person at the controls is non-spoilery. For example, in a puzzle game you might say "You have a duck in your inventory" or "Gary mentioned something about ducks", as long as this just repeats information the player already has and doesn't overtly do the cognitive work for them. This is not zero knowledge, but it is intended to retain the active player's sense of agency: anything they get on top is something they could easily have come up with themselves.

If you go to a random person and say "You should go ask that person whether their underwear is red", the statement itself does not claim much about the state of the world. But a prompt like this can give someone the affordance to make an as-good-as-any-other query that yields a "surprising" amount of information.

For war purposes, one could mention a set of coordinates, which could affect the information gathering of any agent capable of visiting or seeing into that location. Because the actual checking of the facts is not done by the hinter, the revealed information should be believable. However, such a hint might suggest that the hinter stands to benefit from the actions likely to result from discovering the fact.

The moon picture guy could say things like "Oh boy, cameras are great inventions", trying to encourage the production of photographs. Or "Stars sure are beautiful things", trying to increase the serendipitous laying of eyes on the Moon. Or "We should try radical things to increase the world's cheese supply", trying to encourage attention to low-probability myths as hail-mary financial explorations.

Measure

Feb 04, 2022

Simple/weak answer: Have a trusted third party verify the evidence.

Dagon

Feb 04, 2022

I suspect this doesn't generalize in any useful way. SOME beliefs have partial evidence that can be shared, or correlated evidence (including proof-of-belief) that can be shared. Very few are amenable to an un-gamable mapping from math to experience. It'll be very specific to the belief and the evidence.

Wagers and prediction markets are good examples of proof-of-belief.  Y doesn't have to trust X on this topic (but does have to meta-trust X that the wager is real).  And the more participants, the less trust required in any given participant.