
I've been thinking a lot about 'parallel economies' recently. One of the main differences between 'slow takeoff' and 'fast takeoff' predictions is whether AI is integrated into the 'human civilization' economy or is constructing a separate 'AI civilization' economy. Maybe it's worth explaining a bit more what I mean by this: you can think of 'economies' as collections of agents who trade with each other. These collections often have a hierarchical structure, and where we draw the lines is somewhat arbitrary. Imagine a person who works at a company and participates in its internal economy, while the company participates in national and global economies, and the person participates in those economies as well. A better picture is a very dense graph with lots of nodes, where the links between groups of nodes have a heaviness that depends on the number of links between the nodes in those groups.
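To make this picture concrete, here's a minimal sketch (in Python, with made-up agents, trade volumes, and groupings) of how you could compute the heaviness of the link between two groups by summing the node-level trade links that cross their boundary:

```python
from collections import defaultdict

# Hypothetical node-level trade graph: (agent_a, agent_b) -> trade volume.
trade_links = {
    ("alice", "acme_hr"): 5.0,       # wages inside the company
    ("alice", "grocer"): 2.0,        # household spending
    ("acme_hr", "acme_sales"): 8.0,  # internal transfers
    ("acme_sales", "overseas_buyer"): 12.0,  # exports
}

# Hypothetical grouping of agents into nested economies.
group_of = {
    "alice": "acme_internal",
    "acme_hr": "acme_internal",
    "acme_sales": "acme_internal",
    "grocer": "national",
    "overseas_buyer": "global",
}

# Heaviness of a group-to-group link = total volume of node links crossing it.
group_links = defaultdict(float)
for (a, b), volume in trade_links.items():
    key = tuple(sorted((group_of[a], group_of[b])))  # treat links as undirected
    group_links[key] += volume

print(dict(group_links))
# {('acme_internal', 'acme_internal'): 13.0,
#  ('acme_internal', 'national'): 2.0,
#  ('acme_internal', 'global'): 12.0}
```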

As Adam Smith argues, the ability of an economy to support specialization of labor depends on its size. If you have an island with a single inhabitant, it doesn't make sense to fully employ a farmer (since a full-time farmer can generate much more food than a single person could eat); for a village with 100 inhabitants it doesn't make sense to farm more than would feed a hundred mouths; and so on. But as you make more and more of a product, investments with a fixed cost and a small multiplicative payoff become better and better, to the point that a planet with ten billion people will have massive investment in farming specialization that makes it vastly more efficient per unit than the village farming system. So for much of history, increased wealth has been driven by this increased specialization of labor, which was driven by the increased size of the economy (both through population growth and through decreased trade barriers widening the links between economies until they effectively became one economy).
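As a toy illustration (the numbers are made up), here's a quick sketch of the underlying arithmetic: an investment with a fixed cost and a small per-unit payoff is only worth making once output is large enough.

```python
def worth_investing(units_produced, fixed_cost, savings_per_unit):
    """Does a specialization investment pay for itself?

    The investment costs `fixed_cost` up front and saves
    `savings_per_unit` on every unit produced, so its value
    scales with the size of the market it serves.
    """
    return units_produced * savings_per_unit > fixed_cost

# Hypothetical numbers: a better plough costs 1,000 and saves 0.1 per bushel.
for label, bushels in [("island", 50), ("village", 5_000), ("planet", 500_000_000_000)]:
    print(label, worth_investing(bushels, fixed_cost=1_000, savings_per_unit=0.1))
# island False
# village False
# planet True
```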

One reason to think economies will remain integrated is that increased size benefits all actors in the economy on net; another is that some of the critical links will be human-human links, or that human-AI links will be larger than AI-AI links. But if AI-AI links have a much lower friction cost, then the economy formed just of AI-AI links can 'separate' from the total civilizational economy, much in the way that the global economy could fragment through increased trade barriers or political destabilization (as has happened many times historically, sometimes catastrophically). More simply, it could be that all the interesting things are happening in the AI-only economy, even if it's on paper linked to the human economy. Here, one of the jobs of AI alignment could be seen as ensuring either that there's continuity of value between the human-human economy and the AI-AI economy, or that the human-AI links remain robust, so that humans are always relevant economic actors.

One challenge for theories of embedded agency, as compared to Cartesian theories, is that the 'true dynamics' of optimization (where a function defined over a space points to a single global maximum, possibly achieved by multiple inputs) are replaced by 'approximate dynamics'. This means that by default we get the hassles associated with numerical approximations, like when integrating differential equations. If you tell me that you're doing Euler's method on a particular system, I need to know a lot about the system and about the particular hyperparameters you're using to know how well you'll approximate the true solution. This is the toy version of trying to figure out how a human reasons through a complicated cognitive task; you would need to know lots of details about the 'hyperparameters' of their process to replicate their final result.
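For instance, here's a minimal sketch (standard forward Euler on a toy system) of how the quality of the approximation hinges on a single 'hyperparameter', the step size:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Toy system dy/dt = y with y(0) = 1, whose true solution is e^t.
f = lambda t, y: y
exact = math.e  # true value of y(1)

for steps in (2, 10, 100, 1000):
    approx = euler(f, y0=1.0, t0=0.0, t1=1.0, steps=steps)
    print(f"steps={steps:5d}  error={abs(approx - exact):.4f}")
# With too few steps the 'approximate dynamics' land far from the true
# dynamics; knowing the algorithm alone doesn't tell you how far.
```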

This makes getting guarantees hard. We might be able to establish what the 'sensible' solution range for a problem is, but establishing which algorithms can generate sensible solutions under which parameter settings seems much harder. Imagine trying to characterize the set of deep neural network parameters that will perform acceptably well on a particular task (first for a particular architecture, and then across all architectures!).

"Myopia" involves competitiveness reduction to the extent that the sort of side effects it's trying to rule out are useful. Is the real-world example of speculative execution (and related tech) informative as a test case?

One simple version is that, when doing computation A, the processor adjusts its estimate of how likely it is to have to do computations like A again, so that in the future it can do them more quickly. But this means there's a side channel that can be used to extract information that should be private; this is what Spectre and Meltdown were about. Various mitigations were proposed, various additional attacks were developed, and so on; at one point I saw an analysis suggesting that 10 years of improvements had to be thrown out because of these attacks, whereas others suggest that mitigation could be quite cheap.
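To give a flavor of the mechanism (a toy simulation, not an actual Spectre or Meltdown exploit), here's a sketch of how a shared cache can act as a covert channel: one computation warms a particular slot as a side effect, and a second computation recovers that bit purely by timing its own access.

```python
import time

class ToyCache:
    """A crude model of a shared cache: cached slots are fast, others slow."""
    def __init__(self):
        self.cached = set()

    def access(self, slot):
        start = time.perf_counter()
        if slot not in self.cached:
            time.sleep(0.001)  # simulate a slow miss to main memory
            self.cached.add(slot)
        return time.perf_counter() - start

def sender(cache, secret_bit):
    # Side effect of 'computation A': warm slot 1 iff the secret bit is 1.
    if secret_bit:
        cache.access(1)

def receiver(cache):
    # No direct communication: just time an access and see if it was fast.
    return 1 if cache.access(1) < 0.0005 else 0

cache = ToyCache()
sender(cache, secret_bit=1)
print(receiver(cache))  # prints 1, recovered purely from shared-cache timing
```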

One downside from the comparison's point of view is that the scale is very low-level, and some of the exploits mostly deal with communication between two contemporaneous processes, in a way that matters for some versions of factored cognition and not others (but matters a lot for private keys on public servers). Would it even be useful for parallel executions of the question-answerer to use this sort of exploit as a shared buffer?

[That is, there's a clearly visible direct cost to algorithms that don't use a shared buffer compared to algorithms that do; this question is closer to "can we estimate the unseen cost of having to be much more stringent in our other assumptions in order to eliminate hidden shared buffers?"]