Culture is adapted to be learnable, transmissible, and recombinable by humans. To the extent that AI and natural intelligence operate on similar principles, these adaptations should be expected to carry over when culture is used as a training dataset for AI. If so, an AI trained on culture would catch up to the cultural state of the art quickly but have more trouble progressing beyond it.

If the above holds, one would expect rapid AI progress up to the frontier of human culture and the nearby low-hanging fruit, after which progress would slow. More speculatively, depending on how compatible humans are with the advancing AI's outputs, we could have an unexpectedly easy time keeping up with the AI's (relatively slower) acceleration.

Are we currently living in a culture overhang? If so, how does it affect the picture of AI timelines?

Lukas_Gloor

Oct 03, 2022

Arguably, a culture overhang makes it easier for AI to attain a decisive strategic advantage. As I wrote elsewhere:

If you placed an AGI into the stone age, it would be inefficient at taking over the world even with appropriately crafted output channels because stone age tools (which include stone age humans the AGI could manipulate) are neither very useful nor reliable. It would be easier for an AGI to achieve influence in 1995 when the environment contained a greater variety of increasingly far-reaching tools. But with the internet being new, particular strategies to attain power (or even just rapidly acquire knowledge) were not yet available. Today, it is arguably easier than ever for an AGI to quickly and more-or-less single-handedly transform the world.

One counter-consideration to the above is that, through culture, the world may have become more efficient, in the sense that fewer undiscovered ideas remain that would give you a powerful advantage over others ("Thielian secrets").

(A counter-counter-consideration is that Eliezer says anyone with half a brain who's read books on nanotechnology realizes that it'll be easy to invent advanced nanotech and transform the world with it. I haven't read any book on nanotechnology yet.) 

It might be worth distinguishing a data overhang from a culture overhang. You could imagine a world in the year 1500 where everything people do gets recorded and labelled to generate massive amounts of data, and where this has been going on for millennia. Imagine all the data available from that world being fed into a clever AI training scheme. You'd have a data overhang, but culture would be far behind where we are now. Imagining AI takeover dynamics or fast vs. slow takeoff dynamics in that hypothetical could be illuminating.

My take is that a culture overhang makes hard takeoff more likely. I wrote about this previously (see the hyperlink above). In short, the concept of intelligence seems under-defined in an important way.

I want to distinguish between something like trained/crystallized intelligence (or "competence") and learning potential. Small differences in learning potential compound over time, so someone with slightly better learning potential could reach a vastly better end state in terms of trained/crystallized intelligence after a long training run:

If the child in the chair next to me in fifth grade was slightly more intellectually curious, somewhat more productive, and marginally better disposed to adopt a truth-seeking approach and self-image than I was, this could initially mean they score 100%, and I score 95% on fifth-grade tests – no big difference. But as time goes on, their productivity gets them to read more books, their intellectual curiosity and good judgment get them to read more unusually useful books, and their cleverness gets them to integrate all this knowledge in better and increasingly more creative ways. I’ll reach a point where I’m just sort of skimming things because I’m not motivated enough to understand complicated ideas deeply, whereas they find it rewarding to comprehend everything that gives them a better sense of where to go next on their intellectual journey. By the time we graduate university, my intellectual skills are mostly useless, while they have technical expertise in several topics, can match or even exceed my thinking even in areas I specialized in, and get hired by some leading AI company. The point being: an initially small difference in dispositions becomes almost incomprehensibly vast over time.
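As a toy illustration of how a small edge in learning potential compounds, here is a minimal sketch: assume each learner's competence grows multiplicatively every year, with the slightly better learner compounding at a slightly higher rate. The starting scores and growth rates are arbitrary illustrative assumptions, not estimates.

```python
# Toy model: two learners whose competence compounds multiplicatively each year.
# The starting scores (95 vs. 100) and per-year growth rates (5% vs. 6%) are
# arbitrary illustrative assumptions, not empirical estimates.

def competence(start, rate, years):
    """Crystallized competence after `years` of compounding at `rate` per year."""
    return start * (1 + rate) ** years

for years in (0, 10, 20, 40):
    me = competence(95, 0.05, years)         # slightly weaker learner
    neighbor = competence(100, 0.06, years)  # slightly stronger learner
    print(f"year {years:2d}: ratio = {neighbor / me:.2f}")
# The ratio grows like (1.06 / 1.05) ** years, i.e. exponentially in time,
# so a small initial edge turns into an ever-larger multiplicative gap.
```

Adding a feedback term (better learners also improving their own learning rate) would make the divergence even more dramatic, which is closer to the spirit of the anecdote above.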

Readers can probably see how this relates to hard(er) takeoff. Quoting again from the hyperlinked comment:

The standard AI foom narrative sounds a bit unrealistic when discussed in terms of some AI system inspecting itself and remodeling its inner architecture in a very deliberate way driven by architectural self-understanding. But what about the framing of being good at learning how to learn? There’s at least a plausible-sounding story we can tell where such an ability might qualify as the “secret sauce” that gives rise to a discontinuity in the returns of increased AI capabilities. In humans – and admittedly this might be too anthropomorphic – I'd think about it in this way: If my 12-year-old self had been brain-uploaded to a suitable virtual reality, copied, and given the task of devouring the entire internet in 1,000 years of subjective time (with no aging) to acquire enough knowledge and skill to produce novel and for-the-world useful intellectual contributions, the result probably wouldn’t be much of a success. If we imagined the same with my 19-year-old self, there’s a high chance the result wouldn’t be useful either – but also some chance it would be extremely useful. Assuming, for the sake of the comparison, that a copy clan of 19-year-olds can produce highly beneficial research outputs this way, and a copy clan of 12-year-olds can’t, what does the landscape look like in between? I don’t find it evident that the in-between is gradual. I think it’s at least plausible that there’s a jump once the copies reach a level of intellectual maturity to make plans which are flexible enough at the meta-level and divide labor sensibly enough to stay open to reassessing their approach as time goes on and they learn new things. Maybe all of that is gradual, and there are degrees of dividing labor sensibly or of staying open to reassessing one’s approach – but that doesn’t seem evident to me. Maybe this works more as an on/off thing.

GPT-3 has access to all this knowledge on the internet, but it doesn't do much with it. When it scans a copy of a book by Kahneman, it doesn't start investigating its own mind for biases based on what it read. If it could think, it would probably consider Stephen Jay Gould's writing on evolutionary biology to be about as good as Richard Dawkins' writing (ouch!). But then there's a threshold of learning potential where things "click" and it can suddenly tell good ideas from bad ones. Now it can identify the gems in the vast sea of ideas and information that is the internet and implement (and further develop) the good ones. In her recent report, Ajeya writes:

[...] relatively shortly after deployment, Magma’s datacenter would essentially contain a populous “virtual civilization” running ahead of human civilization in its scientific and technological sophistication.

My intuition is that such virtual civilizations work the way I describe in my example of the 12-year-old vs. the 19-year-old. Before a given threshold is reached, not much may come of it even if there's a ton of activity going on. After the threshold, the subagents in the AI civilization are clever enough to divide labor sensibly, have good judgment about how to delegate and orchestrate things, get better at "learning how to learn," etc. From there, things would run away from us, because we go from "the culture overhang doesn't get exploited"* to "the entire culture overhang now gets exploited."

*All of the internet may get read by earlier language models or weakly generally intelligent systems, but they simply won't do enough with the information at hand – for the same reasons that the vast majority of people won't suddenly turn into polymath geniuses who understand the world deeply even if you give them some good book recommendations.
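As a toy sketch of this threshold picture (purely illustrative assumptions, not a claim about how real systems scale): suppose the underlying learning potential improves smoothly, while the fraction of the cultural corpus that actually gets exploited follows a steep logistic around some threshold, so from the outside the change looks nearly on/off.

```python
import math

def culture_exploited(learning_potential, threshold=1.0, steepness=20.0):
    """Fraction of the available cultural corpus that gets usefully exploited.
    A steep logistic around `threshold`; all constants are illustrative only."""
    return 1 / (1 + math.exp(-steepness * (learning_potential - threshold)))

for lp in (0.8, 0.9, 0.95, 1.0, 1.05, 1.1, 1.2):
    print(f"learning potential {lp:.2f} -> {culture_exploited(lp):.3f} of culture exploited")
# Smooth, modest gains near the threshold flip the output from ~2% to ~98%,
# which is one way the "click" described above could look from the outside.
```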

If nanotech is so easy, why haven't humans done it yet?

the gears to ascension, 2y
it's not exactly "easy", but it's mostly constrained by the difficulty of getting accurate approximate models of mesoscopic physics and other condensed matter physics. if alphafold 2 was so easy, why hadn't humans learned to do it by hand sooner, or thought of a similarly strong model to simplify it enough to work with? to be honest I am a bit confused about that; I really want to see what it is that alphafold 2 knows that we don't, but even starting on that question is a ways over my head. it seems to me that AF2 is proof that we can exceed human-level condensed matter approximations with a lot of effort, but that it's not trivial. though, another take is that humans have had some pretty significant success at nanotech, but it's just really fuckin hard to match biology in warm environments, turns out. we do have reason to expect that atomically precise manufacturing out of chemicals with stronger bonds is going to happen at some point, but finding the paths through mesoscale physics that allow building complex machines out of strong bonds at that scale has been proving to be a difficult task.

yes and no. several restatings of the same point, may contain 1am errors, challenge my view:

much is encoded in human culture, but individual humans regularly push far beyond culture based on a confluence of good learning factors, and I do still think there's more to know about the basis for strongly general algorithmic intelligence beyond "just throw a dense transformer at it haha".

reinforcement learning and empowerment objectives ought to be able to reinvent disproportionately large amounts of culture from scratch, and we should not expect to spend decades at this stage of ai capability where ai is near human algorithmic intelligence but struggling to reach it.

deepmind's strongest successes show that the ability to map abstract spaces usefully maxes out well above human level. so while to some degree i agree about algorithmic intelligence being bottlenecked by a cultural head start, the best algorithms for learning from culture should turn out to find the relevant parts of the human corpus much more efficiently than most humans, and therefore also be able to push beyond human knowledge more efficiently.

I agree that the training data availability problem is real for current ai, but human-level ai would only require human-level amounts of training data, and while we're doing really well at total capability, a human who saw everything gpt3 did would be much smarter. any human, I suspect, not just ones we think of as smart - trying hard to do that much hard stuff changes a person.