Ege Erdil

If you have any questions for me or just want to talk, feel free to reach out by sending a private message on this site or by sending an e-mail to egeerdil96@gmail.com.

You can also find me on Metaculus at https://www.metaculus.com/accounts/profile/116023/, or on Discord with the username starfall7651.

Comments

This is not quite true. The raw policy networks of AlphaGo-like models typically play at around 3 dan on the amateur scale, which makes them good amateur players but nowhere near the equivalent of grandmaster level. If you match percentiles in the rating distributions, 3 dan in Go is perhaps about as strong as an 1800 Elo player in chess, while "master level" is at least 2200 Elo and "grandmaster level" starts at 2500 Elo.

Edit: It seems policy networks have improved since I last checked these rankings, and the biggest networks currently available for public use can reach a strength of perhaps as high as 6 dan without MCTS. That would be somewhat weaker than a professional player, but not by much. It is still far off from "grandmaster level", though.
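
To make the percentile-matching comparison concrete, here is a minimal Python sketch of the conversion, assuming you have cumulative rating-distribution data for both games. The GO_CDF and CHESS_CDF tables and the go_dan_to_chess_elo helper are illustrative placeholders, not real EGF or FIDE statistics; the numbers were chosen only so the example reproduces the rough "3 dan ≈ 1800 Elo" mapping described above.

```python
import bisect

# Hypothetical cumulative percentiles: (rating, fraction of rated players at
# or below that rating). Placeholder numbers, NOT real EGF/FIDE data.
# Go amateur ranks are encoded as a single number (dan positive, kyu negative).
GO_CDF = [(-30, 0.0), (1, 0.80), (3, 0.93), (5, 0.985), (7, 0.999)]
CHESS_CDF = [(400, 0.0), (1400, 0.60), (1800, 0.93), (2200, 0.993), (2500, 0.9995)]

def cdf(table, x):
    """Cumulative fraction at rating x, by piecewise-linear interpolation."""
    xs = [r for r, _ in table]
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, p0), (x1, p1) = table[i - 1], table[i]
    return p0 + (p1 - p0) * (x - x0) / (x1 - x0)

def inverse_cdf(table, p):
    """Rating whose cumulative fraction is p, by linear interpolation."""
    ps = [q for _, q in table]
    i = bisect.bisect_left(ps, p)
    if i == 0:
        return table[0][0]
    if i == len(table):
        return table[-1][0]
    (x0, p0), (x1, p1) = table[i - 1], table[i]
    return x0 + (x1 - x0) * (p - p0) / (p1 - p0)

def go_dan_to_chess_elo(dan):
    """Map an amateur Go rank to the chess Elo at the same population percentile."""
    return inverse_cdf(CHESS_CDF, cdf(GO_CDF, dan))

print(go_dan_to_chess_elo(3))  # with these placeholder tables, roughly 1800
```

With real distribution data the same two-table interpolation works in either direction, so one could just as well map a chess Elo onto the Go rank at the same percentile.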

I think you're ignoring the qualifier "literally portrayed" in Matthew's sentence, and neglecting the prior context: he's talking about AI development as something driven forward mainly by hobbyists, without outsized impacts.

He's talking about more than just the time in which AI goes from e.g. doubling the AI software R&D output of humans to some kind of singularity. The specific details Eliezer has given about this scenario have not been borne out: for example, in his 2010 debate with Robin Hanson, he emphasized a scenario in which a few people working in a basement and keeping all of their insights secret hit upon some key software innovation that enables their piece of consumer hardware to outcompete the rest of the world.

It's worth noting that Robin Hanson also said that "takeoff" is most likely to take months. He just said it for ems, and in his world that rate of growth was driven by the entire world economy working as a whole, rather than by one local part of the world having such better software that it could outcompete everyone else with vastly fewer material resources. I find it incomprehensible that you call this a "mild win" for Eliezer's prediction, given that we live in a world where individual AI labs are valued at ~$100B and are raising tens of billions of dollars in capital.

I find myself confused about what point this post is trying to make even after reading through it twice. Can you summarize your central point in 100 words or less?

If the title is meant to be a summary of the post, I think that would be analogous to someone saying "nuclear forces provide an untapped wealth of energy". It's true, but the energy is untapped precisely because nobody has come up with a good way of tapping into it. A post that tried to address engineering problems around energy production by saying "we need to look closely at how to extract energy from the strong interaction, and we need to check whether we can squeeze anything out of whatever new physics is discovered" would not be compelling.

If you come up with a strategy for how to do this, then I'm much more interested. That's a big reason I'm asking for a summary: I think you may have tried to express something like this in the post and I'm missing it.