This is a linkpost for https://arxiv.org/abs/2111.00210

"Mastering Atari Games with Limited Data", Ye et al 2021:

Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample-efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

We propose a sample-efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 190.4% mean human performance and 116.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience, and outperforms state SAC (SAC given ground-truth state inputs) on some tasks on the DMControl 100k benchmark. This is the first time an algorithm has achieved super-human performance on Atari games with so little data. EfficientZero's performance is also close to DQN's performance at 200 million frames, while consuming 500 times less data.

EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner, and it is available at this https URL. We hope it will accelerate research on MCTS-based RL algorithms in the wider community.

This work is supported by the Ministry of Science and Technology of the People’s Republic of China, the 2030 Innovation Megaprojects “Program on New Generation Artificial Intelligence” (Grant No. 2021AAA0150000).

Some have said that poor sample-efficiency on ALE has been a reason to downplay DRL progress or implications. The primary boost in EfficientZero (table 3), pushing it past the human benchmark, is some simple self-supervised learning (SimSiam on predicted vs actual observations).
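For readers who want the gist of that consistency loss: below is a minimal PyTorch-style sketch of a SimSiam-style objective between the dynamics network's predicted next latent state and the encoding of the observation that actually followed. The module names and signatures here are illustrative, not the paper's exact code.

```python
# Minimal sketch of a SimSiam-style consistency loss: pull the latent state
# predicted by the dynamics network toward the (stop-gradient) encoding of the
# observation that actually followed. Module names are illustrative.
import torch
import torch.nn.functional as F

def consistency_loss(predicted_latent, next_obs, encoder, projector, predictor):
    with torch.no_grad():
        # Target branch: encode the real next observation, no gradients.
        target = projector(encoder(next_obs))
    # Online branch: project and predict from the dynamics network's output.
    online = predictor(projector(predicted_latent))
    # Negative cosine similarity, as in SimSiam.
    return -F.cosine_similarity(online, target, dim=-1).mean()
```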


This milestone resembles the "Atari fifty" task in the 2016 Expert Survey in AI,

Outperform human novices on 50% of Atari games after only 20 minutes of training play time and no game specific knowledge.

For context, the original Atari-playing deep Q-network outperforms professional game testers on 47% of games, but used hundreds of hours of play to train.

Previously Katja Grace posted that the original Atari task had been achieved early. Experts estimated the Atari fifty task would take 5 years with 50% chance (so, in 2021), though they thought there was a 25% chance it would take at least 20 years under a different question framing.

The survey doesn't seem to define what 'human novice' performance is. But EfficientZero's performance curve looks pretty linear in Figure 7 over the 220k frames, finishing at ~1.9x human gametester performance after 2h (6x the allotted time). So presumably at 20 minutes, EfficientZero is at ~0.3x the 2h gametester performance (1.9x * 1/6)? That doesn't strike me as an improbable level of performance for a novice, so it's possible that challenge has been met. If not, it seems likely that we're pretty close to it.

Can someone give a rough explanation of how this compares to the recent DeepMind Atari-playing AI:

https://www.lesswrong.com/posts/mTGrrX8SZJ2tQDuqz/deepmind-generally-capable-agents-emerge-from-open-ended?commentId=bosARaWtGfR836shY#bosARaWtGfR836shY

And, for that matter, how both of them compare to the older DeepMind paper:

https://deepmind.com/research/publications/2019/playing-atari-deep-reinforcement-learning

Are they accomplishing qualitatively different things? The same thing but better?

Update: I originally posted this question over here, then realized this post existed and maybe I should just post the question here. But then it turned out people had already started answering my question-post, so I am declaring that the canonical place to answer the question.

Curated. Although this isn't a LessWrong post, it seems like a notable result for AGI progress. Also see this highly-upvoted, accessible explanation of why EfficientZero is a big deal. Lastly, I recommend the discussion in the comments here.

Some basic questions in case anyone knows and wants to help me out:

1. Is this a single neural net that can play all the Atari games well, or a different net for each game?

2. How much compute was spent on training?

3. How many parameters?

4. Would something like this work for e.g. controlling a robot using only a few hundred hours of training data? If not, why not?

5. What is the update / implication of this, in your opinion?

(I did skim the paper and use the search bar, but was unable to answer these questions myself, probably due to lack of expertise)

  1. Different networks for each game
  2. They train for 220k steps for each agent and mention that 100k steps take 7 hours on 4 GPUs (no mention of which GPUs, but maybe RTX 3090s would be a good guess?)
  3. They don't mention it
  4. They are explicitly motivated by robotics control, so yes, they expect this to help in that direction. I think the main problem is that robotics requires more complicated reward-shaping to obtain desired behaviour. In Atari the reward is already computed for you and you just need to maximise it; when designing a robot to put dishes in a dishwasher, the rewards need to be crafted by humans (a toy illustration of such hand-crafted shaping follows this list). Going from "Desired Behavior -> Rewards for RL" is harder than "Rewards for RL -> Desired Behavior"
  5. I am somewhat surprised by the simplicity of the 3 methods described in the paper, I update towards "dumb and easy improvements over current methods can lead to drastic changes in performance".
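On point 4: a toy, entirely hypothetical illustration of what "rewards crafted by humans" looks like for a robot task, in contrast to Atari, where the game score is handed to you. Nothing here comes from the paper.

```python
# Hypothetical hand-crafted reward for a "put the plate in the dishwasher rack"
# task: a human has to decide on dense shaping terms and sparse bonuses,
# whereas Atari simply exposes the game score as the reward.
import numpy as np

def shaped_reward(gripper_pos: np.ndarray, plate_pos: np.ndarray,
                  plate_in_rack: bool) -> float:
    distance = float(np.linalg.norm(gripper_pos - plate_pos))
    reward = -0.1 * distance      # dense shaping: move the gripper toward the plate
    if plate_in_rack:
        reward += 10.0            # sparse bonus: the behaviour we actually want
    return reward
```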

As far as I can see, their improvements are:

  1. Learn the environment dynamics by self-supervision instead of relying only on reward signals. Meaning that they don't learn the dynamics end-to-end like in MuZero. For them, the loss function for the environment dynamics is completely separate from the RL loss function. (I was wrong: they in fact add a similarity loss to MuZero's loss function that provides extra supervision for learning the dynamics, but gradients from rewards still reach the dynamics and representation networks.)
  2. Instead of having the dynamics model predict the reward at each future step, have it predict the reward summed over a time window (what they call the "value prefix"). This means the model doesn't need to get the timing of a reward *exactly* right to get a good loss, and so lets it have a conception of "a reward is coming sometime soon, but I don't quite know exactly when" (a minimal sketch of such a head follows this list).
  3. As training progresses, old trajectories sampled with an earlier policy are no longer very useful to the current model, so as each stored trajectory gets older, they replace more of it in memory with a model-predicted continuation. I guess it's like replacing your 10-year-old memories with imagined "what would I have done" sequences, and the older the memories, the more of them you replace with your imagined decisions.
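A minimal sketch of point 2, the value-prefix idea: an LSTM consumes the rollout of predicted latent states and outputs the cumulative reward over the unrolled window, rather than one reward per step. Shapes, module names, and the scalar output are simplifications, not the paper's exact (categorical) architecture.

```python
# Sketch of a "value prefix" head: predict the summed reward over the imagined
# rollout instead of the exact per-step rewards, so reward timing matters less.
import torch
import torch.nn as nn

class ValuePrefixHead(nn.Module):
    def __init__(self, latent_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, latent_rollout: torch.Tensor) -> torch.Tensor:
        # latent_rollout: (batch, k, latent_dim) predicted latents for k steps.
        hidden, _ = self.lstm(latent_rollout)
        # Output at step i is trained against sum(r_1 .. r_i), the value prefix.
        return self.out(hidden).squeeze(-1)   # (batch, k)
```
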
They train for 220k steps for each agent and mention that 100k steps take 7 hours on 4 GPUs (no mention of which GPUs, but maybe RTX 3090s would be a good guess?)

Holy cow, am I reading that right? An RTX 3090 costs, like, $2,000. So they were able to train this whole thing with about one day's worth of compute, using equipment that cost less than $10K in total? That means there's loads of room to scale this up... It means that they could (say) train a version of this architecture with 1000x more parameters and 100x more training data for about $10M and 100 days. Right?


Learn the environment dynamics by self-supervision instead of relying only on reward signals. Meaning that they don't learn the dynamics end-to-end like in MuZero. For them, the loss function for the environment dynamics is completely separate from the RL loss function.

I wonder how they prevent the latent state representation of observations from collapsing into a zero-vector, thus becoming completely uninformative and trivially predictable. And whether this was the reason MuZero did things its way.

There is a term in the loss function reflecting the disparity between observed rewards and rewards predicted from the state sequence (the first term in equation (6)). If the state representation collapsed, it would be impossible to predict rewards from it. The third term in the loss function would also punish you: it compares the value computed from the state to a linear combination of rewards and the value computed from the state at a different step (the n-step value target defined in equation (4)).
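For reference, the MuZero-style objective being discussed has roughly the following shape (the notation here is approximate rather than copied from the paper; EfficientZero additionally swaps the per-step reward term for a value-prefix loss and adds the self-supervised consistency term):

$$
\mathcal{L}_t(\theta) \;=\; \sum_{k=0}^{K}\Big[\,\ell^{r}\big(u_{t+k},\,r^{k}_{t}\big)\;+\;\ell^{v}\big(z_{t+k},\,v^{k}_{t}\big)\;+\;\ell^{p}\big(\pi_{t+k},\,p^{k}_{t}\big)\Big]\;+\;c\,\lVert\theta\rVert^{2},
\qquad
z_t \;=\; \sum_{i=1}^{n}\gamma^{\,i-1}u_{t+i}\;+\;\gamma^{\,n}\,\nu_{t+n}
$$

where $u$ are the observed rewards, $r^{k}_{t}$, $v^{k}_{t}$, $p^{k}_{t}$ are the model's reward, value, and policy predictions after unrolling $k$ steps, and $z_t$ is the $n$-step bootstrapped value target (the "linear combination of rewards and a later value" mentioned above).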


Oh I see, did I misunderstand point 1 from Razied then, or was it mistaken? I thought the representation and dynamics networks were trained separately, using only the similarity loss.

No, they are training all the networks together. The original MuZero didn't have the similarity loss; it learned the dynamics only via the reward-prediction terms.

(1) Same architecture and hyperparameters, trained separately on every game.

(4) It might work. In fact they also tested it on a benchmark that involves controlling a robot in a simulation, and showed it beats state-of-the-art on the same amount of training data (but there is no "human performance" to compare to).

(5) The poor sample complexity was one of the strongest arguments for why deep learning is not enough for AGI. So, this is a significant update in the direction of "we don't need that many more new ideas to reach AGI". Another implication is that model-based RL seems to be pulling way ahead of model-free RL.


Unless I misinterpreted the results, the "500 times less data" claim is kind of clickbait, because they're referencing DQN with that statement and not SOTA or even MuZero.

Unfortunately they didn't include any performance-over-timesteps graph. I imagine it looks something like, e.g., DreamerV2, where the new algorithm just converges at a higher performance; but if you clipped the whole graph at the maximum performance level of DQN and asked how long it took the new algorithm to get to that level, the answer would be tens or hundreds of times quicker, because on the curve for the new algorithm you haven't left the steep part of the hockey stick yet.

I'd love to see how this algorithm combines with curiosity-driven exploration. They benchmarked on only 26 out of the 57 classic Atari games and didn't include the notoriously hard Montezuma's Revenge, which I assume EfficientZero can't tackle yet. Otherwise, since it is a visual RL algorithm, why not just throw something like Dota 2 at it - if it's truly a "500 times less data"-level revolutionary change, running the experiment should be cheap enough already.

Yes, that just gives you an idea of how far sample-efficiency has come since then. Both DQN and MuZero have sample-efficient configurations which do much better (at the expense of wallclock time / compute, however, which is usually what is optimized for). The real interest here is that it's hit human-level sample-efficiency - and that's with modest compute like a GPU-day and without having transfer learning or extremely good priors learned from a lifetime of playing other games and tasks or self-supervised learning on giant offline datasets of logged experiences, too, I'd note. Hence, concerning.

(Not for the first time, nor will it be the last, do I find myself wondering what the human brain does with those 5-10 additional orders of magnitude more compute that some people claim must be necessary for human-level intelligence.)

I would say the major catch here is that the sample-efficient work usually excludes the unsolved exploration games like Montezuma's Revenge, so this doesn't reach human-level in MR. However, I continue to say that exploration seems tractable with a MuZero model if you extract uncertainty/disagreement from the model-based rollouts and use that for the environment interactions, so I don't think MR & Pitfall should be incredibly hard to solve.
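One concrete (hypothetical) way to cash out "extract uncertainty/disagreement from the model-based rollouts": keep an ensemble of dynamics models and use their disagreement about the next latent state as an intrinsic exploration bonus. This is a generic sketch, not something MuZero or EfficientZero actually implements.

```python
# Sketch of ensemble-disagreement exploration: high variance across the
# dynamics ensemble's predictions marks states worth visiting. The module
# names are placeholders, not MuZero/EfficientZero internals.
import torch

def disagreement_bonus(latent, action_onehot, dynamics_ensemble, scale=1.0):
    sa = torch.cat([latent, action_onehot], dim=-1)
    # Each ensemble member predicts the next latent state for (s, a).
    preds = torch.stack([g(sa) for g in dynamics_ensemble])  # (E, batch, dim)
    # Disagreement = variance across members, averaged over latent dimensions.
    return scale * preds.var(dim=0).mean(dim=-1)             # (batch,)
```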

Both DQN and MuZero have sample-efficient configurations which do much better

Say more? In particular, do you happen to know what the sample-efficiency advantage for EfficientZero was over the sample-efficient version of MuZero - e.g., how many frames the more efficient MuZero would require to train to EfficientZero-equivalent performance? This seems to me like the appropriate figure of merit for how much EfficientZero improved over MuZero (and to potentially refute the current applicability of what I interpret as the OpenPhil view about how AGI development should look at some indefinite point later when there are no big wins left in AGI).

The sample-efficient DQN is covered on pg. 2, as the tuned Rainbow. More or less, you just train the DQN a lot more frequently, and a lot more, on its experience replay buffer, with no other modifications; this makes it something like 10x more sample-efficient than the usual hundreds of millions of frames quoted for best results. (But again, because of diminishing returns, the value of training on fresh data, and how lightweight ALE is to run, this costs you a lot more wallclock/total-compute, which is usually what ALE researchers try to economize and why DQN/Rainbow isn't as sample-efficient as possible by default.) That gives you about 25% of human performance at 200k frames, according to Figure 1 in the EfficientZero paper.
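A toy sketch of that "just train more on the replay buffer" knob, with a stub environment and a tiny Q-network standing in for the real thing; the only load-bearing line is the number of gradient updates per environment step (the real tuned Rainbow also changes other hyperparameters, and nothing below is taken from the papers).

```python
# Toy illustration of the replay-ratio knob behind "sample-efficient" DQN
# configurations: many more gradient updates per environment step, trading
# compute/wallclock for sample efficiency. RandomEnv and the tiny Q-net are
# stand-ins, not Rainbow.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomEnv:
    """Stub environment: 4-dim observations, 2 actions, random rewards."""
    def reset(self):
        return torch.randn(4)
    def step(self, action):
        return torch.randn(4), random.random(), random.random() < 0.01, {}

UPDATES_PER_ENV_STEP = 8    # vanilla DQN does roughly 1 update per 4 env steps

env, buffer = RandomEnv(), deque(maxlen=10_000)
qnet = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)

obs = env.reset()
for step in range(2_000):
    with torch.no_grad():
        action = int(qnet(obs).argmax()) if random.random() > 0.1 else random.randrange(2)
    next_obs, reward, done, _ = env.step(action)
    buffer.append((obs, action, reward, next_obs, done))
    obs = env.reset() if done else next_obs
    if len(buffer) < 64:
        continue
    # The "sample-efficient" change: repeat the usual Q-learning update many
    # times per environment step, reusing the replay buffer heavily.
    for _ in range(UPDATES_PER_ENV_STEP):
        o, a, r, o2, d = map(list, zip(*random.sample(list(buffer), 32)))
        o, o2 = torch.stack(o), torch.stack(o2)
        a, r = torch.tensor(a), torch.tensor(r)
        d = torch.tensor(d, dtype=torch.float32)
        with torch.no_grad():
            target = r + 0.99 * (1 - d) * qnet(o2).max(dim=1).values
        q = qnet(o).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = F.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```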

MuZero's sample-efficient mode is similar, and is covered in the MuZero paper as 'MuZero-Reanalyze'. In the original MuZero paper, Reanalyze kicks butt like MuZero and achieves mean scores way above the human benchmark [2100%], but they only benchmark the ALE at 200m frames so this is 'sample-efficient' only when compared to the 20 billion frames that the regular MuZero uses to achieve twice better results [5000%]. EfficientZero starts with MuZero-Reanalyze as their baseline and say that 'MuZero' in their paper always refers to MuZero-Reanalyze, so I assume that their figure 1's 'MuZero' is Reanalyze, in which case it reaches ~50% human at 200k frames, so it does twice as well as DQN, but it does a quarter as well as their EfficientZero variant which reaches almost 200% at 200k. I don't see a number for baseline MuZero at merely 200k frames anywhere. (Figure 2d doesn't seem to be it, since it's 'training steps' which are minibatches of many episodes, I think? And looks too high.)

So the 200k frame ranking would go something like: DQN [lol%] < MuZero? [meh?%] < Rainbow DQN [25%] < MuZero-Reanalyze [50%] < MuZero-Reanalyze-EfficientZero [190%].

how many frames the more efficient MuZero would require to train to EfficientZero-equivalent performance?

They don't run their MuZero-Reanalyze to equivalence (although this wouldn't be a bad thing to do in general for all these attempts at greater sample-efficiency). Borrowing my guess from the other comment about how it looks like they are all still in the linearish regime of the usual learning log curve, I would hazard a guess that MuZero-Reanalyze would need 190/50 = 3.8x = 760,000 frames to match EfficientZero at 200k.

Scaling curves along the lines of Jones, so you could see if the exponents differ or if it's just a constant offset (the former would mean EfficientZero is a lot better than it looks in the long run), would of course be research very much up my alley, and the compute requirements for some scaling-curve sweeps don't seem too onerous...

Speaking of sample-efficient MuZero, DM just posted "Procedural Generalization by Planning with Self-Supervised World Models", Anand et al 2021 on a different set of environments than ALE, which also uses self-supervision to boost MuZero-Reanalyze sample-efficiency dramatically (in addition to another one of my pet interests, demonstrating implicit meta-learning through the blessings of scale): https://arxiv.org/pdf/2111.01587.pdf#page=5

Eyeballing the graph at 200k env frames, the self-supervised variant is >200x more sample-efficient than PPO (which has no sample-efficient variant the way DQN does because it's on-policy and can't reuse data), and 5x the baseline MuZero (and a 'model-free' MuZero variant/ablation I'm unfamiliar with). So reasonably parallel.

Performance is mostly limited here by the fact that there are 500 levels for each game (i.e., level overfitting is the problem) so it's not that meaningful to look at sample efficiency wrt environment interactions. The results would look a lot different on the full distribution of levels. I agree with your statement directionally though.

We do actually train/evaluate on the full distribution (see Figure 5, rightmost). MuZero+SSL versions (especially reconstruction) continue to be a lot more sample-efficient even on the full distribution, and MuZero itself seems to be quite a bit more sample-efficient than PPO/PPG.

I'm still not sure how to reconcile your results with the fact that the participants in the procgen contest ended up winning with modifications of our PPO/PPG baselines, rather than Q-learning and other value-based algorithms, whereas your paper suggests that Q-learning performs much better. The contest used 8M timesteps + 200 levels. I assume that your "QL" baseline is pretty similar to widespread DQN implementations.

https://arxiv.org/pdf/2103.15332.pdf

https://www.aicrowd.com/challenges/neurips-2020-procgen-competition/leaderboards?challenge_leaderboard_extra_id=470&challenge_round_id=662

Are there implementation-level changes that dramatically improve performance of your QL implementation?

(Currently on vacation and I read your paper briefly while traveling, but I may very well have missed something.)

The Q-Learning baseline is a model-free control of MuZero. So it shares implementation details of MuZero (network architecture, replay ratio, training details, etc.) while removing the model-based components of MuZero (details in Sec. A.2). Some key differences you'd find vs. a typical Q-learning implementation:

  • Larger network architectures: a 10-block ResNet, compared to a few conv layers in typical implementations.
  • Higher sample reuse: When using a reanalyse ratio of 0.95, both MuZero and Q-Learning use each replay buffer sample an average of 20 times (only ~5% of training samples come from fresh trajectories, so each sample is reused roughly 1/0.05 = 20 times). The target network is updated every 100 training steps.
  • Batch size of 1024, and some smaller details like using categorical reward and value predictions, similar to MuZero.
  • We also have a small model-based component which predicts the reward at the next time step, which lets us decompose Q(s, a) into reward and value predictions, just like MuZero (see the sketch right after this list).
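A minimal sketch of that decomposition, with scalar heads instead of the categorical ones actually used, and with made-up module names; it only illustrates the idea of recomposing Q(s, a) from a predicted next-step reward and the value of the predicted next state.

```python
# Sketch: Q(s, a) = predicted_reward(s, a) + gamma * value(predicted next state),
# MuZero-style. Scalar (non-categorical) heads and module names are
# simplifications, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedQ(nn.Module):
    def __init__(self, latent_dim: int, num_actions: int, gamma: float = 0.997):
        super().__init__()
        self.gamma = gamma
        self.num_actions = num_actions
        self.dynamics = nn.Linear(latent_dim + num_actions, latent_dim)  # s' = g(s, a)
        self.reward_head = nn.Linear(latent_dim + num_actions, 1)        # r_hat(s, a)
        self.value_head = nn.Linear(latent_dim, 1)                       # V(s')

    def forward(self, latent: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        a_onehot = F.one_hot(action, self.num_actions).float()
        sa = torch.cat([latent, a_onehot], dim=-1)
        reward = self.reward_head(sa).squeeze(-1)
        next_latent = self.dynamics(sa)
        value = self.value_head(next_latent).squeeze(-1)
        return reward + self.gamma * value   # Q(s, a)
```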

I would guess larger networks + higher sample reuse have the biggest effect size compared to standard Q-learning implementations. 

The ProcGen competition also might have used the easy difficulty mode compared to the hard difficulty mode used in our paper.

Thanks, this is very insightful. BTW, I think your paper is excellent!

Thanks, glad you liked it, I really like the recent RL directions from OpenAI too! It would be interesting to see the use of model-based RL for the "RL as fine-tuning paradigm": making large pre-trained models more aligned/goal-directed efficiently by simply searching over a reward function learned from humans. 

Would you say Learning to Summarize is an example of this? https://arxiv.org/abs/2009.01325

It's model-based RL because you're optimizing against a model of the human (i.e., the reward model). And there are some results at the end on test-time search.

Or do you have something else in mind?
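As a concrete illustration of "test-time search against the reward model", here is a best-of-n reranking sketch; `generate_candidates` and `reward_model` are hypothetical placeholders for a pretrained policy and a preference-trained reward model, not the Learning to Summarize code.

```python
# Best-of-n "search" at test time: sample several candidates from the policy
# and keep the one the learned human-preference reward model scores highest.
# Both callables below are hypothetical stand-ins.
from typing import Callable, List

def best_of_n(prompt: str,
              generate_candidates: Callable[[str, int], List[str]],
              reward_model: Callable[[str, str], float],
              n: int = 16) -> str:
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda c: reward_model(prompt, c))
```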

There's no PPO/PPG curve there -- I'd be curious to see that comparison. (Though I agree that QL/MuZero will probably be more sample-efficient.)