RLE-Foundation/RLeXplore
RLeXplore provides stable baselines of exploration methods in reinforcement learning, such as the intrinsic curiosity module (ICM), random network distillation (RND), and rewarding impact-driven exploration (RIDE).
This toolkit helps reinforcement learning (RL) researchers accelerate their work by providing a standardized way to implement and compare intrinsic reward algorithms. It takes various intrinsic reward module configurations as input and produces benchmarked results and well-structured code for easy integration into RL projects. It is designed for RL researchers and practitioners focused on exploration in complex environments.
459 stars. No commits in the last 6 months.
Use this if you are an RL researcher developing or evaluating new intrinsically-motivated exploration methods and need a consistent, reliable framework for implementation and comparison.
Not ideal if you need a general-purpose reinforcement learning library without a specific focus on intrinsic exploration rewards, or if you are a beginner seeking a high-level API for basic RL tasks.
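To illustrate the kind of method RLeXplore standardizes, here is a minimal, self-contained sketch of an RND-style intrinsic reward in plain NumPy. This is not RLeXplore's actual API; the linear "networks", dimensions, and learning rate are illustrative assumptions. The idea: a frozen, randomly initialized target network embeds observations, a predictor network is trained to match it, and the prediction error serves as the intrinsic (novelty) reward, shrinking for observations the agent has seen often.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMB_DIM, LR = 4, 8, 0.05  # illustrative sizes, not RLeXplore defaults

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(OBS_DIM, EMB_DIM))
# Predictor network, trained to match the target's embedding.
W_pred = np.zeros((OBS_DIM, EMB_DIM))

def intrinsic_reward(obs):
    """MSE between predictor and frozen-target embeddings of obs."""
    err = obs @ W_pred - obs @ W_target
    return float(np.mean(err ** 2))

def update_predictor(obs):
    """One gradient step on the predictor's MSE toward the target."""
    global W_pred
    err = obs @ W_pred - obs @ W_target           # shape: (EMB_DIM,)
    grad = np.outer(obs, err) * (2.0 / EMB_DIM)   # dMSE/dW_pred
    W_pred -= LR * grad

obs = rng.normal(size=OBS_DIM)
before = intrinsic_reward(obs)
for _ in range(200):
    update_predictor(obs)
after = intrinsic_reward(obs)
# The reward shrinks for a repeatedly visited observation: the novelty signal.
print(before > after)
```

In a full implementation the target and predictor are deep networks and the reward is normalized by a running estimate of its standard deviation; RLeXplore packages such details behind a common module interface.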
Stars: 459
Forks: 23
Language: Jupyter Notebook
License: MIT
Category: ml-frameworks
Last pushed: Apr 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RLE-Foundation/RLeXplore"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
DLR-RM/stable-baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
google-deepmind/dm_control
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning...
Denys88/rl_games
RL implementations
pytorch/rl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
yandexdataschool/Practical_RL
A course in reinforcement learning in the wild