I haven't seen much discussion of this, but it seems like an important factor in how well AI systems deployed by actors with different goals manage to avoid conflict (cf. my discussion of equilibrium and prior selection problems here).

For instance, would systems be trained:

  • Against copies of agents developed by other labs (possibly with measures to mask private information)?
  • Simultaneously with other agents in a simulator that each developer has access to?
  • Against copies of themselves?
  • Against distributions of counterpart policies engineered to have certain properties? What would those properties be?
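The options above can be framed as different opponent-sampling strategies in a multi-agent training loop. A minimal sketch, purely illustrative (the regime names and function are made up here, not taken from any real framework):

```python
import random

def sample_opponent(regime, learner, other_lab_agents=(), engineered_pool=()):
    """Pick the counterpart policy the learner trains against this episode.

    Hypothetical sketch: each regime corresponds to one option above.
    """
    if regime == "other_labs":
        # Copies of agents developed by other labs (possibly with
        # private information masked before sharing).
        return random.choice(other_lab_agents)
    if regime == "self_play":
        # A frozen copy of the learner itself; a shallow copy stands in
        # for a policy snapshot here.
        return dict(learner)
    if regime == "engineered":
        # A counterpart drawn from a distribution engineered to have
        # certain properties (e.g. varying bargaining toughness).
        return random.choice(engineered_pool)
    raise ValueError(f"unknown regime: {regime}")
```

The interesting design questions live in what populates `other_lab_agents` and `engineered_pool`, which is what the bullets are asking about.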
2 Answers

Daniel Kokotajlo

Dec 07, 2020


My guess is that at some point we'll transition away from this "first we train, then we deploy" paradigm to one where systems are continually learning on the job, and that insofar as powerful AIs play a role in a multipolar scenario, they'll be in this second paradigm. So in a sense they'll be learning from each other, though perhaps early in their training (i.e., prior to deployment) they were trained against copies of themselves or something. Unfortunately I doubt your case #1 will happen unless we advocate strongly for it; I think by the time these agents are this powerful, their code will be closely guarded. These are all just guesses though, and other scenarios are certainly plausible.

Makes sense. Though you could have deliberate coordinated training even after deployment. For instance, I'm particularly interested in the question of "how will agents learn to interact in high-stakes circumstances that they will rarely encounter?" One could imagine the overseers of AI systems coordinating to fine-tune their systems in simulations of such encounters even after deployment. Not sure how plausible that is, though.

Daniel Kokotajlo (3y)
I totally agree it could be done; I'm just saying I think it won't happen without special effort on our part. Rivals are suspicious of each other, and would probably be suspicious of a proposal like this coming from their rival, if they are even concerned at all about the problem it is trying to fix.


Dec 10, 2020


Some off-the-cuff thoughts:

It seems plausible that transformative agents will be trained exclusively on real-world data (without using simulated environments) [EDIT: by "data" I mean to include the observation/reward signal from the real-world environment in an online RL setup]; examples include social media feed-creation algorithms and algo-trading algorithms. In such cases, the researchers don't choose how to implement the "other agents" (the other agents are just part of the real-world environment, which the researchers don't control).

Focusing on agents that are trained in simulated environments that involve multiple agents: for a lab to use copies of other labs' agents, the labs would probably need to cooperate (or some other process involving additional actors would need to exist). In any case, using copies of the agent that is being trained (i.e., self-play) seems very plausible to me. (I think both AlphaZero and OpenAI Five were trained via self-play, and self-play is generally considered a very prominent technique for RL in simulated environments that involve multiple agents.)
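One simple formal cousin of self-play is fictitious play, where the learner best-responds to the empirical mixture of its own past play. A toy sketch on rock-paper-scissors (illustrative only, not how AlphaZero or OpenAI Five actually work; all names here are made up):

```python
# Payoff for the row player in rock-paper-scissors: 1 win, 0 tie, -1 loss.
PAYOFF = {
    ("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
    ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
    ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0,
}

def fictitious_self_play(iters=3000):
    """Each round, best-respond to the empirical mixture of one's own
    past play. In two-player zero-sum games the empirical frequencies
    converge to a Nash equilibrium; for rock-paper-scissors that is the
    uniform mixture (1/3, 1/3, 1/3). Toy illustration of self-play."""
    counts = {"R": 1, "P": 1, "S": 1}  # smoothed counts of own past play
    for _ in range(iters):
        total = sum(counts.values())
        # Expected payoff of each action against a copy of ourselves
        # playing the empirical mixture of past play.
        value = {a: sum(PAYOFF[a, b] * counts[b] / total for b in counts)
                 for a in counts}
        best = max(value, key=value.get)
        counts[best] += 1
    total = sum(counts.values())
    return {a: counts[a] / total for a in counts}
```

The point of the sketch is structural: the "counterpart policy" is generated from the learner's own history, so no other lab's code is ever needed, which is part of what makes self-play so convenient relative to the cooperative options above.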