dreamerv2 and dreamerv3
DreamerV3 is the successor to DreamerV2. It extends discrete world models beyond Atari to diverse continuous-control domains, so V2 is largely superseded for new projects, though both remain available implementations of the same algorithmic lineage.
About dreamerv2
danijar/dreamerv2
Mastering Atari with Discrete World Models
This project helps reinforcement learning researchers and practitioners train agents that master complex tasks, particularly in simulated environments such as Atari games or robotic control. You provide the environment's visual observations, and it trains an agent capable of human-level or better performance. It's designed for those developing or evaluating advanced AI agents.
About dreamerv3
danijar/dreamerv3
Mastering Diverse Domains through World Models
This project offers a reinforcement learning algorithm that trains AI agents to master a wide array of complex control tasks, from playing games to robot navigation. You provide data from simulated or real-world interactions, and the system outputs an optimized policy governing the agent's behavior. It is aimed at AI researchers and engineers working on autonomous systems or generalized agents.
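Both repositories follow the same high-level recipe: learn a world model from real experience, then train the agent's behavior on trajectories "imagined" inside that model's latent space rather than in the real environment. The toy sketch below illustrates only that imagination phase; every class and function name here is illustrative and is not the repositories' actual API, and the random weights stand in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyWorldModel:
    """Illustrative stand-in for a learned world model: encodes an
    observation into a latent state and predicts the next latent
    state and reward. Weights are random; no real learning happens."""
    def __init__(self, obs_dim, latent_dim, act_dim):
        self.enc = rng.normal(size=(obs_dim, latent_dim)) * 0.1
        self.dyn = rng.normal(size=(latent_dim + act_dim, latent_dim)) * 0.1
        self.rew = rng.normal(size=(latent_dim,)) * 0.1

    def encode(self, obs):
        # Map a raw observation to a compact latent state.
        return np.tanh(obs @ self.enc)

    def imagine(self, z, a):
        # Predict the next latent state and reward without
        # touching the real environment.
        z_next = np.tanh(np.concatenate([z, a]) @ self.dyn)
        return z_next, float(self.rew @ z_next)

def imagined_rollout(model, z0, policy, horizon):
    """Roll the model forward in latent space: the 'imagination'
    phase on which Dreamer-style agents train their behavior."""
    z, total, traj = z0, 0.0, []
    for _ in range(horizon):
        a = policy(z)
        z, r = model.imagine(z, a)
        traj.append((z, r))
        total += r
    return traj, total

obs_dim, latent_dim, act_dim = 16, 8, 2
model = ToyWorldModel(obs_dim, latent_dim, act_dim)
policy = lambda z: np.tanh(z[:act_dim])  # illustrative policy head
z0 = model.encode(rng.normal(size=obs_dim))
traj, ret = imagined_rollout(model, z0, policy, horizon=5)
# traj holds 5 (latent, reward) pairs; each latent has shape (8,)
```

In the real implementations the latent states are stochastic (DreamerV2 popularized categorical latents) and the rollout return drives actor-critic updates, but the structure above captures why imagination is cheap: each step is a small latent-space prediction rather than a full environment step.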