The linked article is interesting, and also suggests that it's not as simple as "the good solution is to add more Black people to the training dataset", because the issue isn't simply "our system sometimes misclassifies people as animals", it's "our system sometimes misclassifies people as animals, and one not-so-rare case of this happens to line up with an incredibly offensive old racist slur" -- and that last bit is a subtle fact about human affairs that there's no possible way the system could have learned from looking at labelled samples of images. The dat...
Thanks! (I would not have guessed correctly.)
It would add some possibly-useful context to this review if you explained why you came to it with an axe to grind. (Just as race is both possibly-useful information and a possible source of prejudice to correct for, so also with your prior prejudices about this book.)
OK, I get it. (Or at least I think I do.) And, duh, indeed it turns out (as you were too polite to say in so many words) that I was distinctly confused.
So: Using ordinary conditionals in planning your actions commits you to reasoning like "If (here in the actual world it turns out that) I choose to smoke this cigarette, then that makes it more likely that I have the weird genetic anomaly that causes both desire-to-smoke and lung cancer, so I'm more likely to die prematurely and horribly of lung cancer, so I shouldn't smoke it", which makes wrong decisions....
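The correlational structure being described can be checked with a quick simulation (all numbers here are made up for illustration): a gene raises both the chance of smoking and the chance of cancer, smoking has no causal effect at all, and yet smoking is strong evidence of cancer.

```python
import random

random.seed(1)
population = []
for _ in range(100_000):
    gene = random.random() < 0.2
    # The gene causes both the desire to smoke and the cancer risk;
    # smoking itself has no causal arrow into cancer in this model.
    smokes = random.random() < (0.9 if gene else 0.1)
    cancer = random.random() < (0.8 if gene else 0.05)
    population.append((smokes, cancer))

def p_cancer(given_smokes):
    group = [cancer for smokes, cancer in population if smokes == given_smokes]
    return sum(group) / len(group)

print(round(p_cancer(True), 2), round(p_cancer(False), 2))
```

Conditioning on "I smoke" pulls in the gene, so the first number comes out much larger than the second; an intervention that set "smokes" independently of the gene would leave the cancer rate untouched, which is exactly the gap the counterfactual conditional is supposed to track.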
I agree that much of what's problematic about the example I gave is that the "inner" counterfactuals are themselves unclear. I was thinking that this makes the nested counterfactual harder to make sense of (exactly because it's unclear what connection there might be between them) but on reflection I think you're right that this isn't really about counterfactual nesting and that if we picked other poorly-defined (non-counterfactual) propositions we'd get a similar effect: "If it were morally wrong to eat shellfish, would humans Really Truly Have Free Will?"...
I never found Stalnaker's thesis at all plausible, not because I'd thought of the ingenious little calculation you give but because it just seems obviously wrong intuitively. But I suppose if you don't have any presuppositions about what sort of notion an implication is allowed to be, you don't get to reject it on those grounds. So I wasn't really entitled to say "Pr(A|B) is not the same thing as Pr(B=>A) for any particular notion of implication", since I hadn't thought of that calculation.
Anyway, I have just the same sense of obvious wrongness about th...
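For concreteness, the calculation in question is presumably Lewis's triviality argument; a sketch of my reconstruction, assuming the thesis Pr(A|B) = Pr(B⇒A) holds for every probability function obtainable by conditionalization:

```latex
% Stalnaker's thesis: \Pr(A \mid B) = \Pr(B \Rightarrow A) whenever \Pr(B) > 0.
\begin{align*}
\Pr(B \Rightarrow A)
  &= \Pr(B \Rightarrow A \mid A)\,\Pr(A)
   + \Pr(B \Rightarrow A \mid \lnot A)\,\Pr(\lnot A) \\
  &= \Pr(A \mid B \land A)\,\Pr(A)
   + \Pr(A \mid B \land \lnot A)\,\Pr(\lnot A)
     && \text{(thesis, applied after conditioning on $A$, $\lnot A$)} \\
  &= 1 \cdot \Pr(A) + 0 \cdot \Pr(\lnot A) \\
  &= \Pr(A).
\end{align*}
```

So the thesis forces Pr(A|B) = Pr(A): any two propositions with positive probability would have to be independent, which is absurd for any nontrivial probability function.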
How confident are you that the "right" counterfactual primitive is something like your C(A|B) meaning (I take it) "if B were the case then A would be the case"?
The alternative I have in mind assimilates counterfactual conditionals to conditional probabilities rather than to logical implications, so in addition to your existing Pr(A|B)=... meaning "if B is the case, then here's how strongly I expect A to be the case" there's Prc(A|B)=... meaning "if B were the case -- even though that might require the world to be different from how it actually is -- then h...
I'm not 100% sure I am understanding your terminology. What does it mean to "play stag against (stag,stag)" or to "defect against cooperate/cooperate"?
If your opponent is not in any sense a utility-maximizer then I don't think it makes sense to talk about your opponent's utilities, which means that it doesn't make sense to have a payout matrix denominated in utility, which means that we are not in the situation of my second paragraph above ("The meaning generally assumed in game theory...").
We might be in the situation of my last-but-two paragraph ("Or may...
I think "X and Y are playing a game of stag hunt" has multiple meanings.
The meaning generally assumed in game theory when considering just a single game is that the outcomes in the game matrix are utilities. In that case, I completely agree with Dagon: if on some occasion you prefer to pick "hare" even though you know I will pick "stag", then we are not actually playing the stag hunt game. (Because part of what it means to be playing stag hunt rather than some other game is that we both consider (stag,stag) the best outcome.)
But there are some other situat...
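To make the "outcomes are utilities" reading concrete: with standard textbook payoffs (the numbers below are mine, purely illustrative), (stag,stag) and (hare,hare) are both pure equilibria, and (stag,stag) is the one both players prefer -- so a player who knowingly picks "hare" against "stag" is telling you the matrix wasn't their utilities.

```python
# Stag-hunt payoffs as (row player's utility, column player's utility).
# These are standard illustrative numbers, not taken from the discussion above.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
actions = ["stag", "hare"]

def is_nash(row, col):
    """True if neither player can gain by unilaterally deviating."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in actions)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')]
```

The two-equilibrium structure is what makes stag hunt a coordination problem rather than a dilemma: (stag,stag) Pareto-dominates, but (hare,hare) is safer.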
Inappropriately highbrow proof of #4 (2d Sperner's lemma):
This proves a generalization: any number of dimensions, and any triangulation of the simplex in question. So, the setup is as follows. We have an n-dimensional simplex, defined by n+1 points in n-dimensional space. We colour the vertices with n+1 different colours. Then we triangulate it -- chop it up into smaller simplexes -- and we extend our colouring somehow in such a way that the vertices on any face (note: a face is the thing spanned by any subset of the vertices) of the big simplex are c...
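As a sanity check on the base case of that induction: in one dimension the "simplex" is a segment whose endpoints get the two colours, and the claim is that any colouring of a subdivided segment has an odd number of edges whose endpoints differ. A brute-force sketch (throwaway code, not part of the proof):

```python
import random

def bichromatic_edges(colors):
    """Count edges of the subdivided segment whose two endpoints differ."""
    return sum(1 for a, b in zip(colors, colors[1:]) if a != b)

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 20)
    # Endpoints keep the two boundary colours (the 1-d Sperner condition);
    # interior vertices are coloured arbitrarily.
    colors = [0] + [random.randint(0, 1) for _ in range(n - 2)] + [1]
    assert bichromatic_edges(colors) % 2 == 1
print("every colouring had an odd number of bichromatic edges")
```

The parity argument is immediate -- walking from the 0 end to the 1 end, the colour must flip an odd number of times -- and it is this odd count that the inductive step feeds on in higher dimensions.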
This doesn't (I think) really have much to do with randomness as such. The relevant thing about R is that it's shared information that a hypothetical adversary doesn't get to see.
If u_y isn't chosen adversarially, then our players don't care about pessimizing over u_y but about something like an average over u_y, and then R isn't needed. Or, if they are ultra-cautious people who universally care about worst cases, then they don't care about expectation w.r.t. R but about the worst case as R varies, and then R doesn't he...
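A toy version of "shared information the adversary doesn't get to see": both players derive their move deterministically from a common seed, so they coordinate perfectly, while anyone who lacks the seed sees what looks like a fresh unpredictable draw each round (the setup and names here are mine):

```python
import random

def move_from_shared_seed(seed, round_number, options):
    # Both players evaluate this same deterministic function of (seed, round),
    # so their choices always agree without any communication.
    rng = random.Random(f"{seed}:{round_number}")
    return rng.choice(options)

options = ["stag", "hare"]
seed = 12345  # known to both players, hidden from the adversary

for rnd in range(100):
    a = move_from_shared_seed(seed, rnd, options)
    b = move_from_shared_seed(seed, rnd, options)
    assert a == b  # perfect correlation on every round
print("coordinated on every round")
```

This is why R only earns its keep against an opponent who pessimizes without seeing it: the players' joint play is deterministic given R, but looks random to anyone on the outside.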