All of Yitz's Comments + Replies

Is that fair to say? How much Kolmogorov complexity can be encoded by evolution at a maximum, considering that all information transferred through evolution must be encoded in a single (stem) cell? Especially when we consider how genetically similar we are to beings that don’t even have brains, I have trouble imagining that the amount of “training data” encoded by evolution is very large.
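
For scale, here is a rough back-of-envelope sketch (assuming ~3.1 billion base pairs at 2 bits each, and that DNA is the only channel evolution gets to write to) which puts the upper bound well under a gigabyte:

```python
# Rough upper bound on the information evolution could pass along via the genome.
# Assumptions (not from the comment above): ~3.1e9 base pairs in the human genome,
# 2 bits per base pair, and no other inheritance channel.
BASE_PAIRS = 3.1e9
BITS_PER_BASE_PAIR = 2  # four possible bases -> log2(4) = 2 bits

total_bits = BASE_PAIRS * BITS_PER_BASE_PAIR
total_megabytes = total_bits / 8 / 1e6

print(f"~{total_bits:.2e} bits (~{total_megabytes:.0f} MB)")
# -> roughly 6.2e9 bits, on the order of 775 MB, which is tiny next to modern
#    training corpora. That is the point of the question above.
```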

Tassilo Neubauer (10mo)
I don't see the problem. Your learning algorithm doesn't have to be "very" complicated. It has to work. Machine learning models don't consist of millions of lines of code. I do see how one might expect evolution not to be very good at doing that compression, but I find the argument that lots of bits would actually be needed very unconvincing.
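
For illustration, here is a sketch of a complete, working learning algorithm (logistic regression trained by gradient descent) in a few dozen lines of Python; it's a stand-in example, not any particular model mentioned above:

```python
# A complete learning algorithm in a few dozen lines: logistic regression
# trained with batch gradient descent. Illustrative only -- the point is that
# "it has to work" doesn't require millions of lines of code.
import numpy as np

def train_logistic_regression(X, y, lr=0.1, steps=1000):
    """X: (n_samples, n_features) floats; y: (n_samples,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        logits = X @ w + b
        preds = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
        grad_w = X.T @ (preds - y) / len(y)     # gradient of the log loss
        grad_b = np.mean(preds - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: learn whether a point's coordinate sum is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)
w, b = train_logistic_regression(X, y)
accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```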

I feel like it should somewhat discount the hard takeoff model, but then again I’m not sure what hard takeoff people would have predicted the initial curve to look like here.

Daniel Kokotajlo (1y)
On behalf of hard takeoff people (and as someone who is like 50% one of them), the hard takeoff model predicts this stuff pretty much just as well as the "continuous models," i.e., it is pretty much unsurprised by these data points. (I put "continuous" in scare quotes because IMO it's a rhetorical weasel word that invites motte-and-bailey tactics -- the motte being "surely the burden of proof should be on whoever thinks the straight line on a graph will suddenly break or bend" and the bailey being "therefore the burden of proof is on whoever thinks that there won't be a multi-year period in which the world is going crazy due to powerful AGIs transforming the economy while humans are still in control because the AGIs aren't superhuman yet." I prefer the slow vs. fast takeoff terminology, or soft vs. hard.)

May I ask why you guys decided to publish this now in particular? Totally fine if you can’t answer that question, of course.

It's been high on some MIRI staff's "list of things we want to release" over the years, but we repeatedly failed to make a revised/rewritten version of the draft we were happy with. So I proposed that we release a relatively unedited version of Eliezer's original draft, and Eliezer said he was okay with that (provided we sprinkle the "Reminder: This is a 2017 document" notes throughout).

We're generally making a push to share a lot of our models (expect more posts soon-ish), because we're less confident about what the best object-level path is to ensu...