AI ALIGNMENT FORUM

IlyaShpitser
Ω5060
Comments

"Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities)
IlyaShpitser · 2y · 21

It's important to internalize that the intellectual world lives in the attention economy, like everything else.

Just like "content creators" on social platforms think hard about capturing and keeping attention, so do intellectuals and academics. Clarity and rigor are part of that.


No one has the time, energy, or (as the saying goes) crayons for half-baked ramblings on a blog or forum somewhere.

$1000 USD prize - Circular Dependency of Counterfactuals
IlyaShpitser · 4y · 10

I gave a talk at FHI ages ago on how to use causal graphs to solve Newcomb-type problems. It wasn't even an original idea: Spohn had something similar in 2012.
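
One way such a graph-based formulation can go (a sketch of the general idea, not a transcript of that talk): treat the agent's disposition as a node \(\Pi\) that is a common cause of both the predictor's prediction \(P\) and the decision \(D\), so the graph contains \(P \leftarrow \Pi \rightarrow D\), and model choosing as an intervention on \(\Pi\) rather than on \(D\):

\[
\pi^{\ast} \;=\; \arg\max_{\pi}\; \mathbb{E}\big[\,U \mid do(\Pi = \pi)\,\big].
\]

Because the intervention's consequences reach the prediction, this picks the one-boxing disposition under the standard payoffs, whereas intervening directly on \(D\) cuts the dependence of \(P\) on the choice and gives the usual two-boxing answer.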

I don't think any of this stuff is interesting or relevant for AI safety. There's a pretty big literature on model robustness and algorithmic fairness that uses causal ideas.

If you want to worry about the end of the world, we have climate change, pandemics, and the rise of fascism.

$1000 USD prize - Circular Dependency of Counterfactuals
IlyaShpitser · 4y · 20

Counterfactuals (in the potential-outcome sense used in statistics) and Pearl's structural equation semantics for causality are equivalent.
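
Concretely, a minimal sketch of one direction of that equivalence: in a structural equation model where \(Y\) is determined by its parents and exogenous noise, say \(Y := f_Y(A, \epsilon_Y)\), the counterfactual is obtained by feeding the hypothetical treatment value into the same equation,

\[
Y(a) \;:=\; f_Y(a, \epsilon_Y),
\]

so consistency (\(A = a \Rightarrow Y = Y(a)\)) holds by construction. The reverse direction, representing counterfactuals that satisfy suitable axioms by structural equations, is the sort of thing Halpern's axiomatization results address.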

Progress on Causal Influence Diagrams
IlyaShpitser · 4y* · 20

https://auai.org/uai2021/pdf/uai2021.89.preliminary.pdf (this really is preliminary, e.g. they have not yet uploaded a newer version that incorporates peer review suggestions).

---

You can't do the things in the second paper without worrying about the issues in the first (unless your model is very simple).

Progress on Causal Influence Diagrams
IlyaShpitser · 4y · 20

Pretty interesting.

Since you are interested in policies that operate along some paths only, you might find these of interest:

https://pubmed.ncbi.nlm.nih.gov/31565035/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330047/

We have some recent stuff on generalizing MDPs to have a causal model inside every state ('path-dependent structural equation models', to appear in UAI this year).
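
To give a flavor of what "along some paths only" means (a toy example in standard notation, not taken from the linked papers): with treatment \(A\), mediator \(M\), and outcome \(Y\) in the graph \(A \rightarrow M \rightarrow Y\) plus a direct edge \(A \rightarrow Y\), the effect of \(A\) along the direct edge alone is captured by the nested counterfactual

\[
Y\big(a,\, M(a')\big),
\]

where \(A\) is set to \(a\) on the edge into \(Y\) but to \(a'\) on the path through \(M\); a path-specific policy is the analogous object in which the values fed to different pathways come from a decision rule rather than being held fixed.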
 

What is the interpretation of the do() operator?
Answer by IlyaShpitser · Aug 28, 2020* · 00

You can read Halpern's stuff if you want an axiomatization of something like the responses to the do-operator.

Or you can try to understand the relationship between do() and counterfactual random variables, and try to formulate causality as a missing data problem (whereby a full data distribution on counterfactuals and an observed data distribution on factuals are related via a coarsening process).
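
A minimal sketch of that coarsening view (binary treatment, standard notation): the full data are the counterfactuals together with the treatment, and the observed data are a deterministic coarsening of them via consistency,

\[
(A,\, Y(1),\, Y(0)) \;\longmapsto\; (A,\, Y), \qquad Y \;=\; Y(A) \;=\; A\,Y(1) + (1-A)\,Y(0),
\]

so identifying \(p(Y(a))\) from the factual distribution is a missing data problem: it depends on assumptions about the coarsening (missingness) mechanism, e.g. under randomization (\(Y(a) \perp A\)) one gets \(p(Y(a)) = p(Y \mid A = a)\).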
