While deep reinforcement learning excels at solving tasks where large amounts of data can be collected through virtually unlimited interaction with the environment, learning from limited interaction remains a key challenge. We posit that an agent can learn more efficiently if we augment reward maximization with self-supervised objectives based on structure in its visual input and sequential interaction with the environment. Our method, Self-Predictive Representations (SPR), trains an agent to predict its own latent state representations multiple steps into the future. We compute target representations for future states using an encoder which is an exponential moving average of the agent's parameters, and we make predictions using a learned transition model. On its own, this future-prediction objective outperforms prior methods for sample-efficient deep RL from pixels. We further improve performance by adding data augmentation to the future-prediction loss, which forces the agent's representations to be consistent across multiple views of an observation. Our full self-supervised objective, which combines future prediction and data augmentation, achieves a median human-normalized score of 0.415 on Atari in a setting limited to 100k steps of environment interaction, a 55% relative improvement over the previous state of the art. Notably, even in this limited-data regime, SPR exceeds expert human scores on 7 out of 26 games. The code associated with this work is available at https://github.com/mila-iqia/spr
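To make the objective concrete, here is a minimal NumPy sketch of the two mechanisms the abstract names: an exponential-moving-average (EMA) target encoder and a learned transition model that predicts future latents, scored with a normalized (cosine) prediction loss. This is an illustration only, not the authors' implementation; the linear encoder, the fixed transition matrix, the dimensions, and names like `ema_update` are hypothetical stand-ins (the real SPR uses convolutional encoders, projection heads, and gradient-based training; see the linked repository).

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_loss(pred, target):
    # Normalized prediction loss: 0 when vectors align, 4 when opposite.
    p = pred / (np.linalg.norm(pred) + 1e-8)
    t = target / (np.linalg.norm(target) + 1e-8)
    return 2.0 - 2.0 * float(p @ t)

def ema_update(target_params, online_params, tau=0.99):
    # Target encoder tracks an exponential moving average of the
    # online encoder's parameters (no gradients flow into it).
    return tau * target_params + (1.0 - tau) * online_params

# Toy linear "encoders": 8-dim observations -> 4-dim latents (hypothetical).
online_W = rng.standard_normal((4, 8))
target_W = online_W.copy()

# Toy "learned" transition model: one linear step in latent space.
trans_W = 0.9 * np.eye(4)

obs_t = rng.standard_normal(8)
obs_tk = rng.standard_normal(8)  # observation k steps in the future

# Predict the future latent by rolling the current latent forward,
# and compare against the EMA encoder's embedding of the future frame.
pred = trans_W @ (online_W @ obs_t)
tgt = target_W @ obs_tk
loss = cosine_loss(pred, tgt)

# After each gradient step on the online encoder, refresh the target.
target_W = ema_update(target_W, online_W)
```

In the full method this loss is summed over several prediction steps and added to the usual RL objective, with data augmentation applied to the observations before encoding.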