AI Alignment Forum

Yitzi Litt

I'm an artist, writer, and human being.

To be a little more precise: I make video games, edit Wikipedia, and write here on LessWrong!

Posts (sorted by new)

8 · The Problem With The Current State of AGI Definitions · 3y · 3
26 · Testing PaLM prompts on GPT3 · 3y · 0
0 · Yitz's Shortform · 5y · 0

Comments (sorted by newest)

A Data limited future
Yitz · 3y

Is that fair to say? How much Kolmogorov complexity can be encoded by evolution at a maximum, considering that all information transferred through evolution must be encoded in a single (stem) cell? Especially when we consider how genetically similar we are to beings which don’t even have brains, I have trouble imagining that the amount of “training data” encoded by evolution is very large.
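
To make the scale concrete, here is a minimal back-of-envelope sketch (my own illustration, not part of the original comment), assuming the human genome is roughly 3.1 billion base pairs and that each base can carry at most 2 bits; since the genome is highly repetitive and compressible, the true Kolmogorov complexity would be smaller still.

```python
# Rough upper bound on the information evolution can pass through a single cell,
# using the genome as the channel. Figures below are illustrative assumptions.

GENOME_BASE_PAIRS = 3.1e9  # approximate human genome length (assumed)
BITS_PER_BASE = 2          # 4 possible nucleotides -> at most 2 bits per base

total_bits = GENOME_BASE_PAIRS * BITS_PER_BASE
total_megabytes = total_bits / 8 / 1e6

print(f"Upper bound: {total_bits:.2e} bits (~{total_megabytes:.0f} MB)")
# -> about 6.2e9 bits, i.e. on the order of 775 MB before any compression
```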

Examples of AI Increasing AI Progress
Yitz · 3y

I feel like it should somewhat discount the hard takeoff model, but then again I’m not sure what hard takeoff proponents would have predicted the initial curve here to look like.

Six Dimensions of Operational Adequacy in AGI Projects
Yitz · 3y

May I ask why you guys decided to publish this now in particular? Totally fine if you can’t answer that question, of course.

Wikitag Contributions

PaLM · 3y · (+283)
Occam's Razor · 5y · (+8/-1273)