AI ALIGNMENT FORUM

Adam Shimi

Epistemologist specializing in the difficulties of alignment and how to solve AI X-Risks. Currently at Conjecture.

Blogging at For Methods.

Twitter.

Sequences
Epistemic Cookbook for Alignment
Reviews for the Alignment Forum
AI Alignment Unwrapped
Deconfusing Goal-Directedness
Comments

2 · adamShimi's Shortform · 5y · 8
The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 20

Typo addressed in the latest patch!

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 20

Now addressed in the latest patch!

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 20

Now addressed in the latest patch!

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 20

Thanks for the comment!

We have indeed gotten feedback from multiple people that this part didn't feel detailed enough (although much more from very technical readers than from non-technical ones), and we are working on improving the arguments.

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 30

Thanks for the comment!

We'll correct the typo in the next patch/bug fix.

As for the more directly adversarial tone of the prologue, it is an explicit choice (and is contrasted with the rest of the document). For the moment, we're waiting to get more feedback on the doc to see whether it really turns people off or not.

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 20

Yep, I think you're correct.

Will correct in the next minor update. Thanks!

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 30

Good catch, I think we are indeed mixing the sizes here.

As you say, the point still stands, but we will change it in the next minor update to either compare the same size or make the difference in size explicit.

The Compendium, A full argument about extinction risk from AGI
adamShimi · 10mo · 31

Thanks for the comment!

We want to check the maths, but if you're indeed correct, we will update the numbers (and the reasoning) in the next minor version.

Biology-Inspired AGI Timelines: The Trick That Never Works
adamShimi · 3y · 2-2 · Review for 2021 Review

In many ways, this post is frustrating to read. It isn't straightforward, it needlessly insults people, and it mixes irrelevant details with the key ideas.

And yet, as with many of Eliezer's posts, its key points are right.

What this post does is uncover the main epistemological mistakes made by almost everyone trying their hand at figuring out timelines. Among others:

  • Taking arbitrary guesses within a set of options that you don't have enough evidence to separate
  • Piling arbitrary assumption on arbitrary assumption, leading to completely uninformative outputs
  • Comparing biological processes to human engineering in terms of speed, without noticing that the optimization path is the key variable (and the big uncertainty)
  • Forcing the prediction to fit within a massively limited set of distributions, biasing it towards easy-to-think-about distributions rather than representative ones.

Before reading this post I was already dubious of most timeline work, but this crystallized many of my objections and issues with this line of work.

So I got a lot out of this post. And I expect that many people would if they spent the time I took to analyze it in detail. But I don't expect most people to do so, and so I am ambivalent about whether this post should be included in the final selection.

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
adamShimi · 3y · 20

I was mostly thinking of the efficiency assumption underlying almost all the scenarios. Critch assumes that a significant chunk of the economy always can and does make the most efficient change (every job being replaced, automated regulations replacing banks when the latter can't move fast enough). This neglects many potential factors, like big economic actors not having to be efficient for a long time, backlash from customers, and in general all the factors that make economic actors and markets less than efficient.

I expect that most of these factors could be addressed with more work on the scenarios.

Posts

65 · The Compendium, A full argument about extinction risk from AGI · 10mo · 13
51 · What I Learned Running Refine · 3y · 4
28 · Methodological Therapy: An Agenda For Tackling Research Bottlenecks · 3y · 5
9 · Refine's Third Blog Post Day/Week · 3y · 0
11 · Refine's Second Blog Post Day · 3y · 0
16 · No One-Size-Fit-All Epistemic Strategy · 3y · 1
23 · Refine's First Blog Post Day · 3y · 0
15 · Shapes of Mind and Pluralism in Alignment · 3y · 0
29 · Abstracting The Hardness of Alignment: Unbounded Atomic Optimization · 3y · 3
16 · Levels of Pluralism · 3y · 0
Wikitag Contributions

Goal-Directedness · 5y · (+152)
Abstraction · 5y · (+185)
Category theory · 5y · (+495/-704)