This is a linkpost for a review of Ajeya Cotra's Biological Anchors report (see also update here) that I wrote in April 2022. It's since won a prize from the EA criticism and red-teaming contest, so I thought it might be good to share here for further discussion.

Here's a summary from the judges of the red-teaming contest:

This is a summary and critical review of Ajeya Cotra’s biological anchors report on AI timelines. It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines).

Note that a link to a summary/review of the book Principles of Deep Learning Theory on page 8 has been moved here: More Recent Progress in the Theory of Neural Networks.

1 comment:

Thanks for this! I think it is a well-written and important critique. I don't agree with it, though, and unfortunately I am not sure how to respond. Basically you are taking a possibility--that there is some special-sauce architecture in the brain that is outside the space of current algorithms, and that we don't know how to find via evolution because it's complex enough that if we just try to imitate evolution we'll probably mess up and draw our search space to exclude it, or make the search space too big and never find it even with 10^44 flops--and saying "this feels like 50% likely to me", and Ajeya is like "no no, it feels like 10% to me", and I'm like "I'm being generous by giving it even 5%; I don't see how you could look at the history of AI progress so far and what we know about the brain and still take this hypothesis seriously." But it's just coming down to different intuitions/priors. (Would you agree with this characterization?)