Zachary Stein-Perlman

Undergraduate and EA organizer at Williams. Prospective longtermist researcher. AI Impacts intern June–September 2022.

Some things I'd be excited to talk about:

  • What happens after an intelligence explosion
  • What happens if most people come to appreciate AI's importance
  • International relations in the context of powerful AI
  • Policy responses to AI — what's likely to happen and what would be good

Comments

Prizes for ELK proposals

Ask dumb questions! ... we encourage people to ask clarifying questions in the comments of this post (no matter how “dumb” they are)

ok... disclaimer: I know little about ML and I didn't read all of the report.

All of our counterexamples are based on an ontology mismatch between two different Bayes nets, one used by an ML prediction model (“the predictor”) and one used by a human.

I am confused. Perhaps the above sentence is true in some tautological sense I'm missing. But in the sections of the report listing training strategies and corresponding counterexamples, I wouldn't describe most counterexamples as based on ontology mismatch. And the above sentence seems in tension with this from the report:

We very tentatively think of ELK as having two key difficulties: ontology identification and learned optimization. ... We don’t think these two difficulties can be very precisely distinguished — they are more like genres of counterexamples

So: do some of your training strategies work perfectly in the nice-ontology case, where the model has a concept of "the diamond is in the room"? If so, I missed that in the report, and it feels like quite a strong result to me; if not, then there are counterexamples based on something other than ontology mismatch.

The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument

Ha, I wrote a comment like yours but slightly worse, then refreshed and your comment appeared. So now I'll just add one small note:

To the extent that (1) normatively, we care much more about the rest of the universe than about our personal lives and futures, and (2) empirically, we believe our choices are much more consequential if we are non-simulated than if we are simulated, we should in practice act as if the odds that we are non-simulated are greater than our purely epistemic credence would suggest. So in practice I'm particularly interested in (C) (and I tentatively buy SIA doomsday as explained by Katja Grace).
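A toy calculation of the point above, as I'd gloss it: weight each hypothesis by how consequential our choices are under it, and the "act-as-if" probability of being non-simulated ends up well above the epistemic credence. All numbers here are made up for illustration, and the stake-weighting framing is just one simple way to formalize it.

```python
# Toy stake-weighting sketch (hypothetical numbers throughout).

p_unsim = 0.1          # assumed epistemic credence that we are NOT simulated
p_sim = 1 - p_unsim    # credence that we are simulated

impact_unsim = 1000.0  # assumed consequentialness of our choices if non-simulated
impact_sim = 1.0       # assumed consequentialness of our choices if simulated

# Decision-relevant ("act-as-if") weight on each hypothesis:
# credence times how much our choices matter under that hypothesis.
weight_unsim = p_unsim * impact_unsim
weight_sim = p_sim * impact_sim
act_as_if_unsim = weight_unsim / (weight_unsim + weight_sim)

print(f"epistemic P(non-simulated)  = {p_unsim:.2f}")   # 0.10
print(f"act-as-if P(non-simulated) ~ {act_as_if_unsim:.2f}")  # ~0.99
```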

Edit: also, isn't the last part of this sentence from the post wrong?

SIA therefore advises not that the Great Filter is ahead, but rather that we are in a simulation run by an intergalactic human civilization, without strong views on late filters for unsimulated reality.